[GitHub] [flink] flinkbot edited a comment on pull request #18399: [FLINK-25118][streaming] Add vertex topology index prefix in vertex name
flinkbot edited a comment on pull request #18399:
URL: https://github.com/apache/flink/pull/18399#issuecomment-1016036536

## CI report:

* 1b59e0775a0fc8288e915036f8ba7a658069881d Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29676)

Bot commands
The @flinkbot bot supports the following commands:

- `@flinkbot run azure` re-run the last Azure build
[jira] [Commented] (FLINK-25683) wrong result if table transform to DataStream then window process in batch mode
[ https://issues.apache.org/jira/browse/FLINK-25683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478384#comment-17478384 ]

Martijn Visser commented on FLINK-25683:
----------------------------------------

[~paul8263] I've assigned it to you.

> wrong result if table transform to DataStream then window process in batch mode
> --------------------------------------------------------------------------------
>
>                 Key: FLINK-25683
>                 URL: https://issues.apache.org/jira/browse/FLINK-25683
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / API, Table SQL / Runtime
>    Affects Versions: 1.14.2
>        Environment: MacBook Pro M1, JDK 8, Scala 2.11, Flink 1.14.2, IDEA 2020
>           Reporter: zhangzh
>           Assignee: Yao Zhang
>           Priority: Major
>        Attachments: TableToDataStreamBatchWindowTest.scala, pom.xml
>
> I have 5 lines of data.
> First I need to transform the current data with SQL,
> then mix the current data with historical data fetched in batch from HBase.
> For specific reasons the program must run in batch mode.
> I think the correct result should be:
> (BOB,1)
> (EMA,1)
> (DOUG,1)
> (ALICE,1)
> (CENDI,1)
> but the actual result is:
> (EMA,1)
>
> If I set a different parallelism, the result is different.
[jira] [Assigned] (FLINK-25683) wrong result if table transform to DataStream then window process in batch mode
[ https://issues.apache.org/jira/browse/FLINK-25683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Martijn Visser reassigned FLINK-25683:
--------------------------------------

    Assignee: Yao Zhang
[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r787442155

## File path: flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/HBase2DynamicTableFactory.java

```java
@@ -136,4 +133,20 @@ public String factoryIdentifier() {
         set.add(LOOKUP_MAX_RETRIES);
         return set;
     }
+
+    @Override
+    public Set<ConfigOption<?>> forwardOptions() {
```

Review comment:
   I think we should do it in another PR, as it's not trivial, because we also need to figure out how to include the generated docs within the docs engine.
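For context, a `forwardOptions()` implementation is typically just a set of the `ConfigOption`s a factory already declares that are safe to override via dynamic table options. The following is a minimal sketch under that assumption; the two options shown are illustrative stand-ins, not the actual set forwarded by the HBase factory in this PR.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

/** Sketch of a factory fragment; the concrete options are hypothetical stand-ins. */
class ExampleDynamicTableFactory {

    // A real factory would reuse the ConfigOptions it already declares in
    // requiredOptions()/optionalOptions() instead of redefining them here.
    static final ConfigOption<Integer> SINK_BUFFER_FLUSH_MAX_ROWS =
            ConfigOptions.key("sink.buffer-flush.max-rows").intType().defaultValue(1000);
    static final ConfigOption<Boolean> LOOKUP_ASYNC =
            ConfigOptions.key("lookup.async").booleanType().defaultValue(false);

    public Set<ConfigOption<?>> forwardOptions() {
        // Only options that do not change the persisted semantics of the table
        // should be forwardable.
        return new HashSet<>(Arrays.asList(SINK_BUFFER_FLUSH_MAX_ROWS, LOOKUP_ASYNC));
    }
}
```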
[jira] [Updated] (FLINK-25694) GSON/Alluxio Vulnerability
[ https://issues.apache.org/jira/browse/FLINK-25694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Martijn Visser updated FLINK-25694:
-----------------------------------
    Issue Type: Technical Debt  (was: Bug)

> GSON/Alluxio Vulnerability
> --------------------------
>
>                 Key: FLINK-25694
>                 URL: https://issues.apache.org/jira/browse/FLINK-25694
>             Project: Flink
>          Issue Type: Technical Debt
>          Components: FileSystems
>    Affects Versions: 1.14.2
>           Reporter: David Perkins
>           Priority: Major
>
> GSON has a bug, which was fixed in 2.8.9, see [https://github.com/google/gson/pull/1991.] This results in the possibility of DoS attacks.
> GSON is included in the `flink-s3-fs-presto` plugin, because Alluxio includes it in their shaded client. I've opened an issue in Alluxio: [https://github.com/Alluxio/alluxio/issues/14868]. When that is fixed, the plugin also needs to be updated.
[jira] [Commented] (FLINK-25614) Let LocalWindowAggregate be chained with upstream
[ https://issues.apache.org/jira/browse/FLINK-25614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478386#comment-17478386 ]

Martijn Visser commented on FLINK-25614:
----------------------------------------

[~lmagics] Given that the Flink 1.15 release branch will be cut at the beginning of February, do you think it could be fixed before that branch is cut?

> Let LocalWindowAggregate be chained with upstream
> -------------------------------------------------
>
>                 Key: FLINK-25614
>                 URL: https://issues.apache.org/jira/browse/FLINK-25614
>             Project: Flink
>          Issue Type: Improvement
>          Components: Table SQL / Runtime
>    Affects Versions: 1.14.2
>           Reporter: Q Kang
>           Assignee: Q Kang
>           Priority: Minor
>             Labels: pull-request-available
>
> When enabling the two-phase (local-global) aggregation strategy for a window TVF, the physical plan is as follows:
> {code:java}
> TableSourceScan -> Calc -> WatermarkAssigner -> Calc
>          ||
>          ||  [FORWARD]
>          ||
> LocalWindowAggregate
>          ||
>          ||  [HASH]
>          ||
> GlobalWindowAggregate
>          ||
>          ||
>         ...{code}
> We can let the `LocalWindowAggregate` node be chained with upstream operators to improve efficiency, just like its non-windowing counterpart `LocalGroupAggregate`.
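For reference, a query shape that produces the local-global window plan above, sketched in Java. The `orders` table, its schema, and the datagen connector are illustrative assumptions; forcing two-phase aggregation via `table.optimizer.agg-phase-strategy` is the planner option this ticket refers to.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TwoPhaseWindowAggExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Force the two-phase (local-global) aggregation strategy.
        tEnv.getConfig()
                .getConfiguration()
                .setString("table.optimizer.agg-phase-strategy", "TWO_PHASE");

        // Hypothetical source; any table with a watermarked event-time column works.
        tEnv.executeSql(
                "CREATE TABLE orders ("
                        + " item STRING,"
                        + " price DOUBLE,"
                        + " ts TIMESTAMP(3),"
                        + " WATERMARK FOR ts AS ts - INTERVAL '5' SECOND"
                        + ") WITH ('connector' = 'datagen')");

        // Window TVF aggregation: planned as LocalWindowAggregate -> (hash) -> GlobalWindowAggregate.
        tEnv.sqlQuery(
                        "SELECT window_start, window_end, item, SUM(price) AS total "
                                + "FROM TABLE(TUMBLE(TABLE orders, DESCRIPTOR(ts), INTERVAL '10' MINUTES)) "
                                + "GROUP BY window_start, window_end, item")
                .execute()
                .print();
    }
}
```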
[jira] [Commented] (FLINK-25562) hive sql connector dependency conflicts
[ https://issues.apache.org/jira/browse/FLINK-25562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478387#comment-17478387 ]

Martijn Visser commented on FLINK-25562:
----------------------------------------

[~luoyuxia] Thank you!

> hive sql connector dependency conflicts
> ----------------------------------------
>
>                 Key: FLINK-25562
>                 URL: https://issues.apache.org/jira/browse/FLINK-25562
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Hive
>    Affects Versions: 1.13.3
>        Environment: CDH 5.15.0, Hadoop 2.6.0, Hive 1.1.0, Flink 1.13.3
>           Reporter: ziqiang.wang
>           Assignee: luoyuxia
>           Priority: Major
>             Labels: easyfix
>            Fix For: 1.13.3
>
>        Attachments: image-2022-01-07-14-54-37-874.png, image-2022-01-13-16-08-20-176.png, image-2022-01-13-16-18-14-830.png, image-2022-01-13-16-21-09-495.png
>
>  Original Estimate: 12h
>  Remaining Estimate: 12h
>
> [https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-hive-1.2.2_2.11/1.13.5/flink-sql-connector-hive-1.2.2_2.11-1.13.5.jar]
> The Flink jar flink-sql-connector-hive-1.2.2_2.11.jar is not compatible with Hive 1.1.0.
> I cloned the source code of Flink, changed the Hive version to 1.1.0 in the POM file of the hive-sql-connector, repackaged flink-sql-connector-hive and deployed it; after that there was no version conflict.
> The version conflicts mainly occur in the hive-exec.jar package.
> For details on the conflicts, see the following figure.
> !image-2022-01-07-14-54-37-874.png!
> I hope the official website can provide a packaged jar for Hive 1.1.x.
[jira] [Comment Edited] (FLINK-25684) Support enhanced show databases syntax
[ https://issues.apache.org/jira/browse/FLINK-25684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17477583#comment-17477583 ]

Moses edited comment on FLINK-25684 at 1/19/22, 8:09 AM:
---------------------------------------------------------

[~jark] Could you please help to check this issue?

was (Author: zhangchaoming):
cc [~jark]

> Support enhanced show databases syntax
> ----------------------------------------
>
>                 Key: FLINK-25684
>                 URL: https://issues.apache.org/jira/browse/FLINK-25684
>             Project: Flink
>          Issue Type: Improvement
>          Components: Table SQL / API
>           Reporter: Moses
>           Priority: Major
>             Labels: pull-request-available
>
> An enhanced `show databases` statement like `show databases like 'db%'` is broadly supported in many popular SQL engines such as Spark SQL and MySQL. We could use such a statement to easily show only the databases that we want.
> h3. SHOW DATABASES [ LIKE regex_pattern ]
> Examples:
> {code:java}
> Flink SQL> create database db1;
> [INFO] Execute statement succeed.
> Flink SQL> create database db1_1;
> [INFO] Execute statement succeed.
> Flink SQL> create database pre_db;
> [INFO] Execute statement succeed.
> Flink SQL> show databases;
> +------------------+
> |    database name |
> +------------------+
> | default_database |
> |              db1 |
> |            db1_1 |
> |           pre_db |
> +------------------+
> 4 rows in set
> Flink SQL> show databases like 'db1';
> +---------------+
> | database name |
> +---------------+
> |           db1 |
> +---------------+
> 1 row in set
> Flink SQL> show databases like 'db%';
> +---------------+
> | database name |
> +---------------+
> |           db1 |
> |         db1_1 |
> +---------------+
> 2 rows in set
> Flink SQL> show databases like '%db%';
> +---------------+
> | database name |
> +---------------+
> |           db1 |
> |         db1_1 |
> |        pre_db |
> +---------------+
> 3 rows in set
> Flink SQL> show databases like '%db';
> +---------------+
> | database name |
> +---------------+
> |        pre_db |
> +---------------+
> 1 row in set
> {code}
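To make the matching semantics concrete, one plausible way to implement the filter is to translate the SQL LIKE pattern into a regex and apply it to the catalog's database list. This is an illustrative sketch, not the FLINK-25684 implementation; the class and method names are made up.

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

/** Sketch of SQL-LIKE filtering for SHOW DATABASES; not the actual Flink implementation. */
class ShowDatabasesFilter {

    /** Converts a SQL LIKE pattern ('%' = any sequence, '_' = any single char) to a regex. */
    static Pattern likeToRegex(String likePattern) {
        StringBuilder regex = new StringBuilder();
        for (char c : likePattern.toCharArray()) {
            switch (c) {
                case '%':
                    regex.append(".*");
                    break;
                case '_':
                    regex.append('.');
                    break;
                default:
                    // Quote everything else so regex metacharacters are matched literally.
                    regex.append(Pattern.quote(String.valueOf(c)));
            }
        }
        return Pattern.compile(regex.toString());
    }

    static List<String> filter(List<String> databases, String likePattern) {
        Pattern p = likeToRegex(likePattern);
        return databases.stream()
                .filter(db -> p.matcher(db).matches())
                .collect(Collectors.toList());
    }
}
```

With the ticket's examples, `filter(databases, "db%")` keeps `db1` and `db1_1` but not `pre_db` or `default_database`, matching the output shown above.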
[GitHub] [flink] flinkbot edited a comment on pull request #18169: [FLINK-25277] add shutdown hook to stop TaskExecutor on SIGTERM
flinkbot edited a comment on pull request #18169:
URL: https://github.com/apache/flink/pull/18169#issuecomment-998925832

## CI report:

* 35011a0fb8ca36b38d854e88a2937357b8736f4d UNKNOWN
* fc7193f9336b272ced363c100517ad4f4f793804 UNKNOWN
* b220c66e7dd126f894235bc66b8fe5ba8b56c89b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29665)
* e27a3578963f34f97398efab3e1b0e182e9d6198 UNKNOWN

Bot commands
The @flinkbot bot supports the following commands:

- `@flinkbot run azure` re-run the last Azure build
[jira] [Updated] (FLINK-25694) GSON/Alluxio Vulnerability
[ https://issues.apache.org/jira/browse/FLINK-25694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Martijn Visser updated FLINK-25694:
-----------------------------------
    Description:
GSON has a bug, which was fixed in 2.8.9, see [https://github.com/google/gson/pull/1991]. This results in the possibility of DoS attacks.

GSON is included in the `flink-s3-fs-presto` plugin, because Alluxio includes it in their shaded client. I've opened an issue in Alluxio: [https://github.com/Alluxio/alluxio/issues/14868]. When that is fixed, the plugin also needs to be updated.

    was:
GSON has a bug, which was fixed in 2.8.9, see [https://github.com/google/gson/pull/1991.] This results in the possibility of DoS attacks.

GSON is included in the `flink-s3-fs-presto` plugin, because Alluxio includes it in their shaded client. I've opened an issue in Alluxio: [https://github.com/Alluxio/alluxio/issues/14868.] When that is fixed, the plugin also needs to be updated.
[jira] [Commented] (FLINK-25694) GSON/Alluxio Vulnerability
[ https://issues.apache.org/jira/browse/FLINK-25694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478398#comment-17478398 ]

Martijn Visser commented on FLINK-25694:
----------------------------------------

[~davidnperkins] Was there a CVE number for this?
[jira] [Updated] (FLINK-25694) GSON/Alluxio Vulnerability
[ https://issues.apache.org/jira/browse/FLINK-25694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Martijn Visser updated FLINK-25694:
-----------------------------------
    Component/s: Connectors / FileSystem
[GitHub] [flink] twalthr commented on a change in pull request #18349: [FLINK-25609][table] Anonymous/inline tables don't require ObjectIdentifier anymore
twalthr commented on a change in pull request #18349:
URL: https://github.com/apache/flink/pull/18349#discussion_r786804756

## File path: flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/ContextResolvedTable.java

```java
@@ -0,0 +1,190 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.factories.FactoryUtil;
+import org.apache.flink.util.Preconditions;
+
+import javax.annotation.Nullable;
+
+import java.util.Map;
+import java.util.Optional;
+import java.util.concurrent.atomic.AtomicInteger;
+
+/**
+ * This class contains information about a table, its {@link ResolvedSchema}, its options and its
+ * relationship with a {@link Catalog}, if any.
+ *
+ * <p>There can be 3 kinds of {@link ContextResolvedTable}:
+ *
+ * <ul>
+ *   <li>A permanent table: a table which is stored in a {@link Catalog} and has an associated
+ *       unique {@link ObjectIdentifier}.
+ *   <li>A temporary table: a table which is stored in the {@link CatalogManager}, has an
+ *       associated unique {@link ObjectIdentifier} and is flagged as temporary.
+ *   <li>An anonymous/inline table: a table which is not stored in a catalog and doesn't have an
+ *       associated unique {@link ObjectIdentifier}.
+ * </ul>
+ *
+ * <p>The different handling of temporary and permanent tables is {@link Catalog} and {@link
+ * CatalogManager} instance specific, hence for these two kind of tables, an instance of this
+ * object represents the relationship between the specific {@link ResolvedCatalogBaseTable}
+ * instance and the specific {@link Catalog}/{@link CatalogManager} instances. For example, the
+ * same {@link ResolvedCatalogBaseTable} can be temporary for one catalog, but permanent for
+ * another one.
+ */
+@Internal
+public class ContextResolvedTable {
+
+    private static final AtomicInteger uniqueId = new AtomicInteger(0);
+
+    private final ObjectIdentifier objectIdentifier;
+    private final @Nullable Catalog catalog;
+    private final ResolvedCatalogBaseTable<?> resolvedTable;
+    private final boolean anonymous;
+
+    public static ContextResolvedTable permanent(
+            ObjectIdentifier identifier,
+            Catalog catalog,
+            ResolvedCatalogBaseTable<?> resolvedTable) {
+        return new ContextResolvedTable(
+                identifier, Preconditions.checkNotNull(catalog), resolvedTable, false);
+    }
+
+    public static ContextResolvedTable temporary(
+            ObjectIdentifier identifier, ResolvedCatalogBaseTable<?> resolvedTable) {
+        return new ContextResolvedTable(identifier, null, resolvedTable, false);
+    }
+
+    public static ContextResolvedTable anonymous(ResolvedCatalogBaseTable<?> resolvedTable) {
+        return new ContextResolvedTable(
+                ObjectIdentifier.ofAnonymous(
+                        generateAnonymousStringIdentifier(null, resolvedTable)),
+                null,
+                resolvedTable,
+                true);
+    }
+
+    public static ContextResolvedTable anonymous(
+            String hint, ResolvedCatalogBaseTable<?> resolvedTable) {
+        return new ContextResolvedTable(
+                ObjectIdentifier.ofAnonymous(
+                        generateAnonymousStringIdentifier(hint, resolvedTable)),
+                null,
+                resolvedTable,
+                true);
+    }
+
+    private ContextResolvedTable(
+            ObjectIdentifier objectIdentifier,
+            @Nullable Catalog catalog,
+            ResolvedCatalogBaseTable<?> resolvedTable,
+            boolean anonymous) {
+        this.objectIdentifier = Preconditions.checkNotNull(objectIdentifier);
+        this.catalog = catalog;
+        this.resolvedTable = Preconditions.checkNotNull(resolvedTable);
+        this.anonymous = anonymous;
+    }
+
+    public boolean isAnonymous() {
+        return this.anonymous;
+    }
+
+    /** @return true if the table is temporary. An anonymous table is always temporary. */
+    public boolean isTemporary() {
+        return catalog == null;
+    }
+
+    public boolean isPermanent() {
```
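To make the three construction paths concrete, here is a brief usage sketch based only on the factory methods visible in the diff above; `catalog` and `resolvedTable` are assumed to come from the catalog manager, and since the class is `@Internal` this is illustrative rather than public API.

```java
import org.apache.flink.table.catalog.Catalog;
import org.apache.flink.table.catalog.ContextResolvedTable;
import org.apache.flink.table.catalog.ObjectIdentifier;
import org.apache.flink.table.catalog.ResolvedCatalogBaseTable;

class ContextResolvedTableUsage {

    // `catalog` and `resolvedTable` would be provided by the CatalogManager in practice.
    static void examples(Catalog catalog, ResolvedCatalogBaseTable<?> resolvedTable) {
        ObjectIdentifier id = ObjectIdentifier.of("my_catalog", "my_db", "my_table");

        // Permanent: backed by a catalog, addressable by its identifier.
        ContextResolvedTable permanent =
                ContextResolvedTable.permanent(id, catalog, resolvedTable);

        // Temporary: known to the CatalogManager, not persisted in any catalog.
        ContextResolvedTable temporary = ContextResolvedTable.temporary(id, resolvedTable);

        // Anonymous/inline: gets a generated identifier and is always temporary.
        ContextResolvedTable anonymous =
                ContextResolvedTable.anonymous("datagen", resolvedTable);

        assert anonymous.isAnonymous() && anonymous.isTemporary();
    }
}
```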
[GitHub] [flink] maosuhan commented on pull request #14376: [FLINK-18202][PB format] New Format of protobuf
maosuhan commented on pull request #14376:
URL: https://github.com/apache/flink/pull/14376#issuecomment-1016185722

> @maosuhan hi, flink-protobuf works very well in my job, but I found a corner case: it does not support `oneof` in proto3. Will this be dealt with in the future?

Thanks for your feedback. Could you provide your proto definition?
[GitHub] [flink] flinkbot edited a comment on pull request #18119: [FLINK-24947] Support hostNetwork for native K8s integration on session mode
flinkbot edited a comment on pull request #18119:
URL: https://github.com/apache/flink/pull/18119#issuecomment-994734000

## CI report:

* 99653eb7e622b96ba0bb47836fe3eefe7b71d942 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29677)

Bot commands
The @flinkbot bot supports the following commands:

- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18169: [FLINK-25277] add shutdown hook to stop TaskExecutor on SIGTERM
flinkbot edited a comment on pull request #18169:
URL: https://github.com/apache/flink/pull/18169#issuecomment-998925832

## CI report:

* 35011a0fb8ca36b38d854e88a2937357b8736f4d UNKNOWN
* fc7193f9336b272ced363c100517ad4f4f793804 UNKNOWN
* b220c66e7dd126f894235bc66b8fe5ba8b56c89b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29665)

Bot commands
The @flinkbot bot supports the following commands:

- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18388: [FLINK-25530][python][connector/pulsar] Support Pulsar source connector in Python DataStream API.
flinkbot edited a comment on pull request #18388:
URL: https://github.com/apache/flink/pull/18388#issuecomment-1015205179

## CI report:

* 76134252d18e14772c18739278af72c9b519e353 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29687)

Bot commands
The @flinkbot bot supports the following commands:

- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18394: [FLINK-25520][Table SQL/API] Implement "ALTER TABLE ... COMPACT" SQL
flinkbot edited a comment on pull request #18394:
URL: https://github.com/apache/flink/pull/18394#issuecomment-1015323011

## CI report:

* 455dfd8ad2146f35d31a602715a9b27a07aa2aba Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29678)
* cc5eaf1dd5be0414a77c59f619e407c01d083e0c Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29685)

Bot commands
The @flinkbot bot supports the following commands:

- `@flinkbot run azure` re-run the last Azure build
[jira] [Commented] (FLINK-25683) wrong result if table transform to DataStream then window process in batch mode
[ https://issues.apache.org/jira/browse/FLINK-25683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478400#comment-17478400 ]

Timo Walther commented on FLINK-25683:
--------------------------------------

Thanks for investigating [~paul8263]. I will loop in [~dwysakowicz] who knows this part of the code better.
[GitHub] [flink] slinkydeveloper commented on a change in pull request #18349: [FLINK-25609][table] Anonymous/inline tables don't require ObjectIdentifier anymore
slinkydeveloper commented on a change in pull request #18349:
URL: https://github.com/apache/flink/pull/18349#discussion_r787453251

## File path: flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java

```java
@@ -157,7 +157,7 @@ public static DynamicTableSource createDynamicTableSource(
         } catch (Throwable t) {
             throw new ValidationException(
                     String.format(
-                            "Unable to create a source for reading table '%s'.\n\n"
+                            "Unable to create a source for reading the table '%s'.\n\n"
```

Review comment:
   What did you mean with this comment then? https://github.com/apache/flink/pull/18349#discussion_r784698670
[GitHub] [flink] zhuzhengjun01 commented on pull request #14376: [FLINK-18202][PB format] New Format of protobuf
zhuzhengjun01 commented on pull request #14376:
URL: https://github.com/apache/flink/pull/14376#issuecomment-1016191275

> like this:

```proto
syntax = "proto3";

package org.apache.flink.formats.protobuf.proto;

option java_multiple_files = false;

message KafkaSinkTestTT {
  oneof name_oneof {
    string name = 1;
  }
  oneof num_oneof {
    int32 num = 2;
  }
  oneof tt_oneof {
    int64 xtime = 3;
  }
  ro row1 = 4;
  message ro {
    string a = 1;
    int32 b = 2;
  }
}
```

Actually, I want to distinguish between default and missing values in proto3.
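Wrapping each scalar in a single-field oneof does give presence semantics in proto3. A sketch of how the generated Java code exposes this, assuming the `KafkaSinkTestTT` message above is compiled with the protobuf Java plugin and the generated classes are referenced directly (the case-enum names follow protoc's usual naming conventions):

```java
// Explicitly setting the field, even to the proto3 default value "":
KafkaSinkTestTT set = KafkaSinkTestTT.newBuilder().setName("").build();
boolean nameWasSet =
        set.getNameOneofCase() == KafkaSinkTestTT.NameOneofCase.NAME; // true

// Leaving the field untouched:
KafkaSinkTestTT unset = KafkaSinkTestTT.getDefaultInstance();
boolean nameMissing =
        unset.getNameOneofCase() == KafkaSinkTestTT.NameOneofCase.NAMEONEOF_NOT_SET; // true
```

A format implementation could then map the `*_NOT_SET` case to SQL NULL and a set case to the (possibly default) value, which is what distinguishing default from missing requires here.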
[GitHub] [flink] slinkydeveloper commented on a change in pull request #18349: [FLINK-25609][table] Anonymous/inline tables don't require ObjectIdentifier anymore
slinkydeveloper commented on a change in pull request #18349:
URL: https://github.com/apache/flink/pull/18349#discussion_r787459410

## File path: flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/ContextResolvedTable.java
[GitHub] [flink] slinkydeveloper commented on a change in pull request #18349: [FLINK-25609][table] Anonymous/inline tables don't require ObjectIdentifier anymore
slinkydeveloper commented on a change in pull request #18349:
URL: https://github.com/apache/flink/pull/18349#discussion_r787463301

## File path: flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/StatementSet.java

```java
@@ -62,10 +62,10 @@
  * written to a table (backed by a {@link DynamicTableSink}) expressed via the given {@link
  * TableDescriptor}.
  *
- * <p>The given {@link TableDescriptor descriptor} is registered as an inline (i.e. anonymous)
- * temporary catalog table (see {@link TableEnvironment#createTemporaryTable(String,
- * TableDescriptor)}. Then a statement is added to the statement set that inserts the {@link
- * Table} object's pipeline into that temporary table.
+ * <p>The given {@link TableDescriptor descriptor} won't be registered in the catalog, but it
+ * will be propagated directly in the operation tree, adding a statement to the statement set
```

Review comment:
   Reworded, check now.
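The behavior the new wording describes can be seen with the existing `StatementSet.addInsert(TableDescriptor, Table)` API. A minimal, hedged sketch; the blackhole sink and the single-column schema are arbitrary choices for illustration:

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableDescriptor;
import org.apache.flink.table.api.TableEnvironment;

public class InlineDescriptorExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());
        Table source = tEnv.fromValues(1, 2, 3);

        // The descriptor is passed inline: no temporary catalog table is registered,
        // it is propagated directly into the operation tree.
        StatementSet set = tEnv.createStatementSet();
        set.addInsert(
                TableDescriptor.forConnector("blackhole")
                        .schema(Schema.newBuilder().column("f0", DataTypes.INT()).build())
                        .build(),
                source);
        set.execute();
    }
}
```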
[GitHub] [flink] XComp commented on a change in pull request #18189: [FLINK-25430] Replace RunningJobRegistry by JobResultStore
XComp commented on a change in pull request #18189:
URL: https://github.com/apache/flink/pull/18189#discussion_r787467634

## File path: flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java

```java
@@ -943,15 +945,26 @@ protected CleanupJobState jobReachedTerminalState(ExecutionGraphInfo executionGr
         archiveExecutionGraph(executionGraphInfo);

         if (terminalJobStatus.isGloballyTerminalState()) {
+            final JobID jobId = executionGraphInfo.getJobId();
             try {
-                jobResultStore.createDirtyResult(
-                        JobResult.createFrom(executionGraphInfo.getArchivedExecutionGraph()));
+                if (jobResultStore.hasCleanJobResultEntry(jobId)) {
+                    log.warn(
```

Review comment:
   I added it because it indicates that something strange is going on in the `JobResultStore`, considering that the user could modify files in the directory of the file-based implementation. In general, this warning should never pop up. But if somebody did something wrong with file names, Flink might store the wrong state under that job.
[GitHub] [flink] dawidwys commented on pull request #18368: [FLINK-18356][Azure] Enforce memory limits for docker containers
dawidwys commented on pull request #18368:
URL: https://github.com/apache/flink/pull/18368#issuecomment-1016200608

@flinkbot run azure
[GitHub] [flink] XComp commented on a change in pull request #18189: [FLINK-25430] Replace RunningJobRegistry by JobResultStore
XComp commented on a change in pull request #18189:
URL: https://github.com/apache/flink/pull/18189#discussion_r787470497

## File path: flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/JobResultEntry.java

```java
@@ -22,41 +22,18 @@
 import org.apache.flink.util.Preconditions;

 /**
- * An entry in a {@link JobResultStore} that couples a completed {@link JobResult} to a state that
- * represents whether the resources of that JobResult have been finalized ({@link
- * JobResultState#CLEAN}) or have yet to be finalized ({@link JobResultState#DIRTY}).
+ * {@code JobResultEntry} is the entity managed by the {@link JobResultStore}. It collects
+ * information about a globally terminated job (e.g. {@link JobResult}).
  */
-public class JobResultEntry {
+public class JobResultEntry implements WithJobResult {
```

Review comment:
   Initially, I thought that it would be helpful for the file-based implementation (where we want to add the serializer version to the persisted record). But it looks like inheritance is good enough in that case. The interface is of no value. I'm going to delete this commit.
[GitHub] [flink] XComp commented on a change in pull request #18189: [FLINK-25430] Replace RunningJobRegistry by JobResultStore
XComp commented on a change in pull request #18189:
URL: https://github.com/apache/flink/pull/18189#discussion_r787472097

## File path: flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/runner/SessionDispatcherLeaderProcessTest.java

```java
@@ -130,28 +146,56 @@ public void start_triggersJobGraphRecoveryAndDispatcherServiceCreation() throws
         final CompletableFuture<Collection<JobGraph>> recoveredJobGraphsFuture =
                 new CompletableFuture<>();
         dispatcherServiceFactory =
-                TestingDispatcherServiceFactory.newBuilder()
-                        .setCreateFunction(
-                                (fencingToken,
-                                        recoveredJobGraphs,
-                                        jobResults,
-                                        jobGraphStore,
-                                        jobResultStore) -> {
-                                    recoveredJobGraphsFuture.complete(recoveredJobGraphs);
-                                    return TestingDispatcherGatewayService.newBuilder().build();
-                                })
-                        .build();
+                createFactoryBasedOnJobGraphs(
+                        recoveredJobGraphs -> {
+                            recoveredJobGraphsFuture.complete(recoveredJobGraphs);
+                            return TestingDispatcherGatewayService.newBuilder().build();
+                        });

         try (final SessionDispatcherLeaderProcess dispatcherLeaderProcess =
                 createDispatcherLeaderProcess()) {
             dispatcherLeaderProcess.start();

-            assertThat(
-                    dispatcherLeaderProcess.getState(),
-                    is(SessionDispatcherLeaderProcess.State.RUNNING));
+            assertThat(recoveredJobGraphsFuture)
+                    .succeedsWithin(100, TimeUnit.MILLISECONDS)
```

Review comment:
   Why is that again? Was it about making the test timeout, which would provide a stacktrace of the waiting threads? 🤔
[GitHub] [flink] flinkbot edited a comment on pull request #18368: [FLINK-18356][Azure] Enforce memory limits for docker containers
flinkbot edited a comment on pull request #18368:
URL: https://github.com/apache/flink/pull/18368#issuecomment-1013194055

## CI report:

* 13a059ef4d8b011d5cc1cac10861e0bafa98c6d2 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29645)

Bot commands
The @flinkbot bot supports the following commands:

- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #12: [FLINK-25689] Introduce atomic commit
JingsongLi commented on a change in pull request #12:
URL: https://github.com/apache/flink-table-store/pull/12#discussion_r787408231

## File path: flink-table-store-core/src/main/java/org/apache/flink/table/store/file/FileStoreOptions.java

```java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.file;
+
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.MemorySize;
+import org.apache.flink.configuration.ReadableConfig;
+
+/** Options for {@link FileStore}. */
+public class FileStoreOptions {
+
+    public static final ConfigOption<Integer> BUCKET =
+            ConfigOptions.key("bucket")
+                    .intType()
+                    .defaultValue(1)
+                    .withDescription(
+                            "Bucket number for file store and partition number for Kafka.");
```

Review comment:
   Remove `and partition number for Kafka`; the log system is an abstraction.

## File path: flink-table-store-core/src/main/java/org/apache/flink/table/store/file/manifest/ManifestFileMeta.java

```java
@@ -118,4 +121,94 @@ public String toString() {
                 numDeletedFiles,
                 Arrays.toString(partitionStats));
     }
+
+    /**
+     * Merge several {@link ManifestFileMeta}s. {@link ManifestEntry}s representing first adding
+     * and then deleting the same sst file will cancel each other.
+     *
+     * <p>NOTE: This method is atomic.
+     */
+    public static List<ManifestFileMeta> merge(
+            List<ManifestFileMeta> metas, ManifestFile manifestFile, long suggestedMetaSize) {
+        List<ManifestFileMeta> result = new ArrayList<>();
+        // these are the newly created manifest files, clean them up if exception occurs
+        List<ManifestFileMeta> newMetas = new ArrayList<>();
+        List<ManifestFileMeta> candidate = new ArrayList<>();
+        long totalSize = 0;
+
+        try {
+            for (ManifestFileMeta manifest : metas) {
+                totalSize += manifest.fileSize;
+                candidate.add(manifest);
+                if (totalSize >= suggestedMetaSize) {
+                    // reach suggested file size, perform merging and produce new file
+                    merge(candidate, manifestFile, result, newMetas);
+                    candidate.clear();
+                    totalSize = 0;
+                }
+            }
+            if (!candidate.isEmpty()) {
+                // merge the last bit of metas
+                merge(candidate, manifestFile, result, newMetas);
+            }
+        } catch (Throwable e) {
+            // exception occurs, clean up and rethrow
+            for (ManifestFileMeta manifest : newMetas) {
+                manifestFile.delete(manifest.fileName);
+            }
+            throw e;
+        }
+
+        return result;
+    }
+
+    private static void merge(
```

Review comment:
   Inline this method? So simple and too many arguments.

## File path: flink-table-store-core/src/main/java/org/apache/flink/table/store/file/operation/FileStoreCommitImpl.java

```java
@@ -0,0 +1,376 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.file.operation;
+
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.table.data.binary.BinaryRowData;
+import org.apache.flink.table.store.file.FileStoreOptions;
+import or
```
[GitHub] [flink] flinkbot edited a comment on pull request #18398: [FLINK-25695][flink-table] Fix state leak in temporalRowTimeJoinOper…
flinkbot edited a comment on pull request #18398:
URL: https://github.com/apache/flink/pull/18398#issuecomment-1016025272

## CI report:

* 0fb5e7a2b2f583a36775e1d7347080e834e0d806 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29679)

Bot commands
The @flinkbot bot supports the following commands:

- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18169: [FLINK-25277] add shutdown hook to stop TaskExecutor on SIGTERM
flinkbot edited a comment on pull request #18169:
URL: https://github.com/apache/flink/pull/18169#issuecomment-998925832

## CI report:

* 35011a0fb8ca36b38d854e88a2937357b8736f4d UNKNOWN
* fc7193f9336b272ced363c100517ad4f4f793804 UNKNOWN
* b220c66e7dd126f894235bc66b8fe5ba8b56c89b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29665)
* e27a3578963f34f97398efab3e1b0e182e9d6198 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29693)

Bot commands
The @flinkbot bot supports the following commands:

- `@flinkbot run azure` re-run the last Azure build
[jira] [Commented] (FLINK-25649) Scheduling jobs fails with org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException
[ https://issues.apache.org/jira/browse/FLINK-25649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478436#comment-17478436 ]

Gil De Grove commented on FLINK-25649:
--------------------------------------

[~zhuzh] [~trohrmann], I was able to reproduce the issue this morning. I'm gathering the logs, will clean them and then upload them here; they will contain the logs of both job managers of our HA deployment.

This morning, I actually only have DataStream (full stream) jobs running; 3 of them cannot be scheduled, and they all have a parallelism of 3. The job manager reports 30 task slots available.

>> One question I have is that when you say "100% of the jobs working", do you mean "All tasks of all jobs are in RUNNING state at the same time", or "All bounded jobs can finish and all unbounded jobs' tasks are in RUNNING state"?

I mean: all the tasks of all the jobs, bounded or unbounded, are scheduled at the same time on the cluster.

> Scheduling jobs fails with org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException
> ------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-25649
>                 URL: https://issues.apache.org/jira/browse/FLINK-25649
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.13.1
>           Reporter: Gil De Grove
>           Priority: Major
>
> Following comment from Till on this [SO question|https://stackoverflow.com/questions/70683048/scheduling-jobs-fails-with-org-apache-flink-runtime-jobmanager-scheduler-noresou?noredirect=1#comment124980546_70683048]
> h2. *Summary*
> We are currently experiencing a scheduling issue with our Flink cluster.
> The symptoms are that some/most/all of the tasks (it depends; the symptoms are not always the same) are shown as _SCHEDULED_ but fail after a timeout. The jobs are then shown as _RUNNING_.
> The failing exception is the following one:
> {{Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Slot request bulk is not fulfillable! Could not allocate the required slot within slot request timeout}}
> After analysis, we assume (we cannot prove it, as there are not that many logs for that part of the code) that the failure is due to a deadlock/race condition that happens when several jobs are submitted to the Flink cluster at the same time, even though we have enough slots available in the cluster.
> We actually see the error with 52 available task slots and 12 jobs that are not scheduled.
> h2. Additional information
> * Flink version: 1.13.1 commit a7f3192
> * Flink cluster in session mode
> * 2 job managers using K8s HA mode (resource requests: 2 CPU, 4 GB RAM; limits set on memory to 4 GB)
> * 50 task managers with 2 slots each (resource requests: 2 CPUs, 2 GB RAM; no limits set)
> * Our Flink cluster is shut down every night and restarted every morning. The error seems to occur when a lot of jobs need to be scheduled. The jobs are configured to restore their state, and we do not see any issues for jobs that are scheduled and run correctly; it really seems to be related to a scheduling issue.
[jira] [Comment Edited] (FLINK-25681) CodeSplitITCase produces large stacktraces and potentially out of memory errors
[ https://issues.apache.org/jira/browse/FLINK-25681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478438#comment-17478438 ] Caizhi Weng edited comment on FLINK-25681 at 1/19/22, 8:56 AM: --- Hi [~MartijnVisser]! Thanks for looking into this issue. This test first runs a large SQL script without code splitting (to make sure that this SQL script fails) and then runs it with code splitting (to make sure it succeeds). I guess it is the first part that produces the deep stack trace. I'll remove the first part (we can verify the failure by hand when constructing the test case) and let's see if this issue still happens. was (Author: tsreaper): Hi [~MartijnVisser]! Thanks for looking into this issue. This test first runs a large SQL script without code splitting (to make sure that this SQL script fails) and then runs it with code splitting (to make sure it succeeds). I guess it is the first part that produces the deep stack trace. I'll remove the first part (we can verify the failure by hand when constructing the test case) and let's see if this issue still happens. > CodeSplitITCase produces large stacktraces and potentially out of memory > errors > --- > > Key: FLINK-25681 > URL: https://issues.apache.org/jira/browse/FLINK-25681 > Project: Flink > Issue Type: Bug > Components: Table SQL / API, Table SQL / Planner >Reporter: Martijn Visser >Priority: Critical > > There's currently an open PR https://github.com/apache/flink/pull/18368 to > enforce memory limits for Docker containers to mitigate or resolve > [https://issues.apache.org/jira/browse/FLINK-18356|https://issues.apache.org/jira/browse/FLINK-18356?focusedCommentId=17476183&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17476183] > As mentioned in the above Flink ticket, the table tests fail. A run can be > found at > https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29461 > When looking into the logs, I noticed that there are large stacktraces > reported on org.apache.flink.table.planner.runtime.batch.sql.CodeSplitITCase. > The logs show numerous errors for > * > {{testSelectManyColumns(org.apache.flink.table.planner.runtime.batch.sql.CodeSplitITCase)}} > * > {{testManyAggregations(org.apache.flink.table.planner.runtime.batch.sql.CodeSplitITCase)}} > > * > {{testManyOrsInCondition(org.apache.flink.table.planner.runtime.batch.sql.CodeSplitITCase)}} > Some examples: > {code:java} > 08:43:03,192 [flink-akka.actor.default-dispatcher-5] INFO > org.apache.flink.runtime.executiongraph.ExecutionGraph [] - > HashAggregate[30254] -> Sink: Collect table sink (1/1) > (c515430ab543bf59471fce8c77f83ad7) switched from INITIALIZING to FAILED on > efa7d996-3752-473e-87ff-66b403a09f38 @ localhost (dataPort=-1). 
> java.lang.RuntimeException: Could not instantiate generated class > 'NoGroupingAggregateWithoutKeys$363111' > at > org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:85) > ~[flink-table-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.table.runtime.operators.CodeGenOperatorFactory.createStreamOperator(CodeGenOperatorFactory.java:40) > ~[flink-table-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.api.operators.StreamOperatorFactoryUtil.createOperator(StreamOperatorFactoryUtil.java:81) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.runtime.tasks.OperatorChain.(OperatorChain.java:201) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.(RegularOperatorChain.java:60) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:653) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:641) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948) > ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917) > ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741) > ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563) > ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_292] > Caused b
[jira] [Commented] (FLINK-25681) CodeSplitITCase produces large stacktraces and potentially out of memory errors
[ https://issues.apache.org/jira/browse/FLINK-25681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478438#comment-17478438 ] Caizhi Weng commented on FLINK-25681: - Hi [~MartijnVisser]! Thanks for looking into this issue. This test first runs a large SQL script without code splitting (to make sure that this SQL script fails) and then runs it with code splitting (to make sure it succeeds). I guess it is the first part that produces the deep stack trace. I'll remove the first part (we can verify the failure by hand when constructing the test case) and let's see if this issue still happens. > CodeSplitITCase produces large stacktraces and potentially out of memory > errors > --- > > Key: FLINK-25681 > URL: https://issues.apache.org/jira/browse/FLINK-25681 > Project: Flink > Issue Type: Bug > Components: Table SQL / API, Table SQL / Planner >Reporter: Martijn Visser >Priority: Critical > > There's currently an open PR https://github.com/apache/flink/pull/18368 to > enforce memory limits for Docker containers to mitigate or resolve > [https://issues.apache.org/jira/browse/FLINK-18356|https://issues.apache.org/jira/browse/FLINK-18356?focusedCommentId=17476183&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17476183] > As mentioned in the above Flink ticket, the table tests fail. A run can be > found at > https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29461 > When looking into the logs, I noticed that there are large stacktraces > reported on org.apache.flink.table.planner.runtime.batch.sql.CodeSplitITCase. > The logs show numerous errors for > * > {{testSelectManyColumns(org.apache.flink.table.planner.runtime.batch.sql.CodeSplitITCase)}} > * > {{testManyAggregations(org.apache.flink.table.planner.runtime.batch.sql.CodeSplitITCase)}} > > * > {{testManyOrsInCondition(org.apache.flink.table.planner.runtime.batch.sql.CodeSplitITCase)}} > Some examples: > {code:java} > 08:43:03,192 [flink-akka.actor.default-dispatcher-5] INFO > org.apache.flink.runtime.executiongraph.ExecutionGraph [] - > HashAggregate[30254] -> Sink: Collect table sink (1/1) > (c515430ab543bf59471fce8c77f83ad7) switched from INITIALIZING to FAILED on > efa7d996-3752-473e-87ff-66b403a09f38 @ localhost (dataPort=-1). 
> java.lang.RuntimeException: Could not instantiate generated class > 'NoGroupingAggregateWithoutKeys$363111' > at > org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:85) > ~[flink-table-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.table.runtime.operators.CodeGenOperatorFactory.createStreamOperator(CodeGenOperatorFactory.java:40) > ~[flink-table-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.api.operators.StreamOperatorFactoryUtil.createOperator(StreamOperatorFactoryUtil.java:81) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.runtime.tasks.OperatorChain.(OperatorChain.java:201) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.(RegularOperatorChain.java:60) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:653) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:641) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948) > ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917) > ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741) > ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563) > ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_292] > Caused by: org.apache.flink.util.FlinkRuntimeException: > org.apache.flink.api.common.InvalidProgramException: Table program cannot be > compiled. This is a bug. Please file an issue. > at > org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:86) > ~[flink-table-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:102) > ~[flink-table-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apach
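For context on the test structure described in the comment above, here is a minimal, hypothetical sketch of a code-splitting check that keeps only the "with splitting" half. This is not the actual CodeSplitITCase; the column count, table name and the use of TableConfigOptions.MAX_LENGTH_GENERATED_CODE to trigger the splitter are assumptions made for illustration.
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.config.TableConfigOptions;

public class CodeSplitSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inBatchMode().build());
        // Keep generated methods small so the code splitter kicks in (assumed knob).
        tEnv.getConfig()
                .getConfiguration()
                .setInteger(TableConfigOptions.MAX_LENGTH_GENERATED_CODE, 4000);

        // Build a wide table and a SELECT over all columns; without splitting,
        // the code generated for such a query can exceed JVM method size limits.
        int columns = 500;
        StringBuilder ddl = new StringBuilder("CREATE TABLE src (");
        StringBuilder select = new StringBuilder("SELECT ");
        for (int i = 0; i < columns; i++) {
            if (i > 0) {
                ddl.append(", ");
                select.append(", ");
            }
            ddl.append("f").append(i).append(" INT");
            select.append("f").append(i).append(" + 1");
        }
        ddl.append(") WITH ('connector' = 'datagen', 'number-of-rows' = '1')");
        tEnv.executeSql(ddl.toString());

        // Only the splitting-enabled run remains; the removed half of the test
        // used to assert that the same query fails when splitting is off.
        tEnv.executeSql(select.append(" FROM src").toString())
                .collect()
                .forEachRemaining(row -> {});
    }
}
{code}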
[GitHub] [flink] slinkydeveloper commented on a change in pull request #18349: [FLINK-25609][table] Anonymous/inline tables don't require ObjectIdentifier anymore
slinkydeveloper commented on a change in pull request #18349: URL: https://github.com/apache/flink/pull/18349#discussion_r787503439 ## File path: flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/CreateTableASOperation.java ## @@ -26,25 +26,32 @@ import java.util.Collections; import java.util.LinkedHashMap; import java.util.Map; +import java.util.function.Supplier; /** Operation to describe a CREATE TABLE AS statement. */ @Internal public class CreateTableASOperation implements CreateOperation { private final CreateTableOperation createTableOperation; -private final SinkModifyOperation insertOperation; +private final Supplier<SinkModifyOperation> insertOperationFactory; + +private SinkModifyOperation insertOperation; Review comment: I've modified the structure of `CreateTableASOperation` a bit to make the summary string meaningful (and similar to `SinkModifyOperation`) and to remove that internal mutability. Check now. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
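As an aside for readers, a hedged sketch of the immutability direction described in that comment, with simplified stand-in types: the real class wires a Supplier of SinkModifyOperation, and asSummaryString here is only assumed to mirror the operation API.
{code:java}
import java.util.function.Supplier;

final class CtasOperationSketch {

    private final String createTableSummary;
    private final Supplier<String> insertOperationFactory;

    CtasOperationSketch(String createTableSummary, Supplier<String> insertOperationFactory) {
        this.createTableSummary = createTableSummary;
        this.insertOperationFactory = insertOperationFactory;
    }

    // No mutable insertOperation field: the summary is derived on demand from
    // the factory's product, the same way SinkModifyOperation renders its own.
    String asSummaryString() {
        return createTableSummary + " AS " + insertOperationFactory.get();
    }
}
{code}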
[jira] [Commented] (FLINK-25649) Scheduling jobs fails with org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException
[ https://issues.apache.org/jira/browse/FLINK-25649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478449#comment-17478449 ] Gil De Grove commented on FLINK-25649: -- [~zhuzh] [~trohrmann], The logs are quite heavy and contain some confidential information. Would it be possible to share them only with you two, or with a specific email address, to avoid any leak in the Apache Jira? If not, we will remove the logs that may contain the confidential information. > Scheduling jobs fails with > org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException > - > > Key: FLINK-25649 > URL: https://issues.apache.org/jira/browse/FLINK-25649 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.13.1 >Reporter: Gil De Grove >Priority: Major > > Following comment from Till on this [SO > question|https://stackoverflow.com/questions/70683048/scheduling-jobs-fails-with-org-apache-flink-runtime-jobmanager-scheduler-noresou?noredirect=1#comment124980546_70683048] > h2. *Summary* > We are currently experiencing a scheduling issue with our Flink cluster. > The symptoms are that some, most or all of our tasks (it depends; the symptoms are > not always the same) are shown as _SCHEDULED_ but fail after a timeout. > The jobs are then shown as _RUNNING_. > The failing exception is the following one: > {{Caused by: java.util.concurrent.CompletionException: > org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: > Slot request bulk is not fulfillable! Could not allocate the required slot > within slot request timeout}} > After analysis, we assume (we cannot prove it, as there are not many > logs for that part of the code) that the failure is due to a deadlock/race > condition that happens when several jobs are submitted to the Flink cluster at the > same time, even though we have enough slots available in the cluster. > We actually see the error with 52 available task slots and 12 jobs > that are not scheduled. > h2. Additional information > * Flink version: 1.13.1 commit a7f3192 > * Flink cluster in session mode > * 2 job managers using k8s HA mode (resource requests: 2 CPU, 4 GB RAM; > limits set on memory to 4 GB) > * 50 task managers with 2 slots each (resource requests: 2 CPUs, 2 GB RAM; no > limits set). > * Our Flink cluster is shut down every night and restarted every morning. > The error seems to occur when a lot of jobs need to be scheduled. The jobs > are configured to restore their state, and we do not see any issues for jobs > that are scheduled and run correctly; it really seems to be related to > a scheduling issue. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] flinkbot edited a comment on pull request #17720: [FLINK-24210][serializer] Return the correct serialized length for wi…
flinkbot edited a comment on pull request #17720: URL: https://github.com/apache/flink/pull/17720#issuecomment-963141306 ## CI report: * cec9ccbee64a29f43eb24f42dfd8db48e0d9e08a Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29681) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] rkhachatryan commented on pull request #18393: Revert "[FLINK-25399][tests] Disable changelog backend in tests"
rkhachatryan commented on pull request #18393: URL: https://github.com/apache/flink/pull/18393#issuecomment-1016229700 Thanks for the review. I think the remaining bugs don't affect correctness so I'm going to merge this PR. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-18356) Exit code 137 returned from process
[ https://issues.apache.org/jira/browse/FLINK-18356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478464#comment-17478464 ] Martijn Visser commented on FLINK-18356: [~dwysakowicz] I've talked to [~TsReaper] about https://issues.apache.org/jira/browse/FLINK-25681 which could cause the table tests to fail. There's an idea on how to solve FLINK-25681. To validate that it indeed works, the plan is to rebase the fix on your PR [https://github.com/apache/flink/pull/18368] and run it on Azure. Do you think that's the right approach, or do you have any other suggestion? > Exit code 137 returned from process > --- > > Key: FLINK-18356 > URL: https://issues.apache.org/jira/browse/FLINK-18356 > Project: Flink > Issue Type: Bug > Components: Build System / Azure Pipelines, Tests >Affects Versions: 1.12.0, 1.13.0, 1.14.0, 1.15.0 >Reporter: Piotr Nowojski >Assignee: Dawid Wysakowicz >Priority: Blocker > Labels: pull-request-available, test-stability > Fix For: 1.15.0 > > {noformat} > = test session starts > == > platform linux -- Python 3.7.3, pytest-5.4.3, py-1.8.2, pluggy-0.13.1 > cachedir: .tox/py37-cython/.pytest_cache > rootdir: /__w/3/s/flink-python > collected 568 items > pyflink/common/tests/test_configuration.py ..[ 1%] > pyflink/common/tests/test_execution_config.py ...[ 5%] > pyflink/dataset/tests/test_execution_environment.py . > ##[error]Exit code 137 returned from process: file name '/bin/docker', > arguments 'exec -i -u 1002 > 97fc4e22522d2ced1f4d23096b8929045d083dd0a99a4233a8b20d0489e9bddb > /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'. > Finishing: Test - python > {noformat} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3729&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #10: [FLINK-25680] Introduce Table Store Flink Sink
JingsongLi commented on a change in pull request #10: URL: https://github.com/apache/flink-table-store/pull/10#discussion_r787516809 ## File path: flink-table-store-connector/src/main/java/org/apache/flink/table/store/connector/sink/SinkRecord.java ## @@ -0,0 +1,71 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.store.connector.sink; + +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.data.binary.BinaryRowData; +import org.apache.flink.types.RowKind; + +import static org.apache.flink.util.Preconditions.checkArgument; + +/** A sink record contains key, value, partition, bucket and row kind information. */ +public class SinkRecord { + +private final BinaryRowData partition; + +private final int bucket; + +private final RowKind rowKind; + +private final BinaryRowData key; + +private final RowData row; + +public SinkRecord( +BinaryRowData partition, int bucket, RowKind rowKind, BinaryRowData key, RowData row) { +checkArgument(partition.getRowKind() == RowKind.INSERT); +checkArgument(key.getRowKind() == RowKind.INSERT); +checkArgument(row.getRowKind() == RowKind.INSERT); Review comment: We should avoid having the caller determine behavior from the kind of these rows; the caller should use the member variables. LogStore also uses this class. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #10: [FLINK-25680] Introduce Table Store Flink Sink
JingsongLi commented on a change in pull request #10: URL: https://github.com/apache/flink-table-store/pull/10#discussion_r787516809 ## File path: flink-table-store-connector/src/main/java/org/apache/flink/table/store/connector/sink/SinkRecord.java ## @@ -0,0 +1,71 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.store.connector.sink; + +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.data.binary.BinaryRowData; +import org.apache.flink.types.RowKind; + +import static org.apache.flink.util.Preconditions.checkArgument; + +/** A sink record contains key, value, partition, bucket and row kind information. */ +public class SinkRecord { + +private final BinaryRowData partition; + +private final int bucket; + +private final RowKind rowKind; + +private final BinaryRowData key; + +private final RowData row; + +public SinkRecord( +BinaryRowData partition, int bucket, RowKind rowKind, BinaryRowData key, RowData row) { +checkArgument(partition.getRowKind() == RowKind.INSERT); +checkArgument(key.getRowKind() == RowKind.INSERT); +checkArgument(row.getRowKind() == RowKind.INSERT); Review comment: We should avoid having the caller determine behavior from the kind of these rows; the caller should use the member variable `rowKind`. LogStore also uses this class. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
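A hedged usage sketch of the contract this review asks for (it assumes the SinkRecord class quoted above, including accessor methods such as rowKind() and row() that exist in the full file): partition, key and row are normalized to INSERT at construction, so callers branch on the record's own rowKind, never on the kinds of the wrapped rows.
{code:java}
import org.apache.flink.table.store.connector.sink.SinkRecord;
import org.apache.flink.types.RowKind;

final class SinkRecordUsageSketch {

    static boolean isRetraction(SinkRecord record) {
        // record.row().getRowKind() is always INSERT by construction (see the
        // checkArgument calls above), so it carries no change information.
        RowKind kind = record.rowKind();
        return kind == RowKind.DELETE || kind == RowKind.UPDATE_BEFORE;
    }
}
{code}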
[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #10: [FLINK-25680] Introduce Table Store Flink Sink
JingsongLi commented on a change in pull request #10: URL: https://github.com/apache/flink-table-store/pull/10#discussion_r787518576 ## File path: flink-table-store-connector/src/main/java/org/apache/flink/table/store/connector/sink/StoreSinkWriter.java ## @@ -0,0 +1,163 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.store.connector.sink; + +import org.apache.flink.annotation.VisibleForTesting; +import org.apache.flink.api.connector.sink.SinkWriter; +import org.apache.flink.table.data.GenericRowData; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.data.binary.BinaryRowData; +import org.apache.flink.table.store.file.ValueKind; +import org.apache.flink.table.store.file.operation.FileStoreWrite; +import org.apache.flink.table.store.file.utils.RecordWriter; +import org.apache.flink.types.RowKind; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; + +/** A {@link SinkWriter} for dynamic store. */ +public class StoreSinkWriter implements SinkWriter { + +private final FileStoreWrite fileStoreWrite; + +private final SinkRecordConverter recordConverter; + +private final boolean overwrite; + +private final ExecutorService compactExecutor; + +private final Map> writers; + +public StoreSinkWriter( +FileStoreWrite fileStoreWrite, SinkRecordConverter recordConverter, boolean overwrite) { +this.fileStoreWrite = fileStoreWrite; +this.recordConverter = recordConverter; +this.overwrite = overwrite; +this.compactExecutor = Executors.newSingleThreadScheduledExecutor(); +this.writers = new HashMap<>(); +} + +private RecordWriter getWriter(BinaryRowData partition, int bucket) { +Map buckets = writers.get(partition); +if (buckets == null) { +buckets = new HashMap<>(); +writers.put(partition.copy(), buckets); +} +return buckets.computeIfAbsent( +bucket, +k -> +overwrite +? 
fileStoreWrite.createEmptyWriter( +partition, bucket, compactExecutor) +: fileStoreWrite.createWriter(partition, bucket, compactExecutor)); +} + +@Override +public void write(RowData rowData, Context context) throws IOException { +RowKind rowKind = rowData.getRowKind(); +SinkRecord record = recordConverter.convert(rowData); +RecordWriter writer = getWriter(record.partition(), record.bucket()); +try { +writeToFileStore(writer, record); +} catch (Exception e) { +throw new IOException(e); +} +rowData.setRowKind(rowKind); +} + +private void writeToFileStore(RecordWriter writer, SinkRecord record) throws Exception { +switch (record.rowKind()) { +case INSERT: +case UPDATE_AFTER: +if (record.key().getArity() == 0) { +writer.write(ValueKind.ADD, record.row(), GenericRowData.of(1)); +} else { +writer.write(ValueKind.ADD, record.key(), record.row()); +} +break; +case UPDATE_BEFORE: +case DELETE: +if (record.key().getArity() == 0) { +writer.write(ValueKind.ADD, record.row(), GenericRowData.of(-1)); +} else { +writer.write(ValueKind.DELETE, record.key(), record.row()); +} +break; +} +} + +@Override +public List prepareCommit(boolean flush) throws IOException { +try { +return prepareCommit(); +} catch (Exception e) { +throw new IOException(e); +} +} + +private List prepareCommit() throws Exception { +List committables = new ArrayList<>(); +for (BinaryRowData partition : writers.keySet()) { +Map buckets
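For readers of the writeToFileStore method quoted above, a standalone restatement of its change encoding (the class and method names are local to this sketch; only RowKind comes from Flink): key-less tables (key arity 0) encode every change as ValueKind.ADD of the row plus a +1/-1 count, while keyed tables use ValueKind.ADD/DELETE directly.
{code:java}
import org.apache.flink.types.RowKind;

final class ChangeEncodingSketch {

    /** Count delta stored alongside the row for key-less (count-based) tables. */
    static int countDelta(RowKind kind) {
        switch (kind) {
            case INSERT:
            case UPDATE_AFTER:
                return 1;  // written as ValueKind.ADD with value +1
            case UPDATE_BEFORE:
            case DELETE:
                return -1; // still ValueKind.ADD, but with value -1 (a retraction)
            default:
                throw new IllegalArgumentException("Unsupported kind: " + kind);
        }
    }
}
{code}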
[jira] [Commented] (FLINK-25695) TemporalJoin cause state leak in some cases
[ https://issues.apache.org/jira/browse/FLINK-25695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478465#comment-17478465 ] Wenlong Lyu commented on FLINK-25695: - [~zicat] could you share your SQL too? It would be easier for others to understand your case with the SQL and configuration you used. On the other hand, have you dumped the heap and analyzed what exactly is leaked? > TemporalJoin cause state leak in some cases > --- > > Key: FLINK-25695 > URL: https://issues.apache.org/jira/browse/FLINK-25695 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.3 >Reporter: Lyn Zhang >Priority: Major > Labels: pull-request-available > > Last year, I reported a similar bug of TemporalJoin causing a state leak. > Detail: FLINK-21833 > Recently, I found that the fix can reduce the leak size but cannot > resolve it completely. > The code at line 213 causes it, and the right fix is to invoke the cleanUp() method. > In FLINK-21833, we discussed that when the code runs at line 213, the > Left State, Right State and registeredTimerState are empty; actually the Left > State and Right State values (MapState) are empty, but the keys are still in > state, so invoking state.clear() is necessary. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Assigned] (FLINK-25681) CodeSplitITCase produces large stacktraces and potentially out of memory errors
[ https://issues.apache.org/jira/browse/FLINK-25681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Martijn Visser reassigned FLINK-25681: -- Assignee: Caizhi Weng > CodeSplitITCase produces large stacktraces and potentially out of memory > errors > --- > > Key: FLINK-25681 > URL: https://issues.apache.org/jira/browse/FLINK-25681 > Project: Flink > Issue Type: Bug > Components: Table SQL / API, Table SQL / Planner >Reporter: Martijn Visser >Assignee: Caizhi Weng >Priority: Critical > > There's currently an open PR https://github.com/apache/flink/pull/18368 to > enforce memory limits for Docker containers to mitigate or resolve > [https://issues.apache.org/jira/browse/FLINK-18356|https://issues.apache.org/jira/browse/FLINK-18356?focusedCommentId=17476183&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17476183] > As mentioned in the above Flink ticket, the table tests fail. A run can be > found at > https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29461 > When looking into the logs, I noticed that there are large stacktraces > reported on org.apache.flink.table.planner.runtime.batch.sql.CodeSplitITCase. > The logs show numerous errors for > * > {{testSelectManyColumns(org.apache.flink.table.planner.runtime.batch.sql.CodeSplitITCase)}} > * > {{testManyAggregations(org.apache.flink.table.planner.runtime.batch.sql.CodeSplitITCase)}} > > * > {{testManyOrsInCondition(org.apache.flink.table.planner.runtime.batch.sql.CodeSplitITCase)}} > Some examples: > {code:java} > 08:43:03,192 [flink-akka.actor.default-dispatcher-5] INFO > org.apache.flink.runtime.executiongraph.ExecutionGraph [] - > HashAggregate[30254] -> Sink: Collect table sink (1/1) > (c515430ab543bf59471fce8c77f83ad7) switched from INITIALIZING to FAILED on > efa7d996-3752-473e-87ff-66b403a09f38 @ localhost (dataPort=-1). 
> java.lang.RuntimeException: Could not instantiate generated class > 'NoGroupingAggregateWithoutKeys$363111' > at > org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:85) > ~[flink-table-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.table.runtime.operators.CodeGenOperatorFactory.createStreamOperator(CodeGenOperatorFactory.java:40) > ~[flink-table-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.api.operators.StreamOperatorFactoryUtil.createOperator(StreamOperatorFactoryUtil.java:81) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.runtime.tasks.OperatorChain.(OperatorChain.java:201) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.(RegularOperatorChain.java:60) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:653) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:641) > ~[flink-streaming-java-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948) > ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917) > ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741) > ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563) > ~[flink-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_292] > Caused by: org.apache.flink.util.FlinkRuntimeException: > org.apache.flink.api.common.InvalidProgramException: Table program cannot be > compiled. This is a bug. Please file an issue. > at > org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:86) > ~[flink-table-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:102) > ~[flink-table-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:83) > ~[flink-table-runtime-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > ... 11 more > Caused by: > org.apache.flink.shaded.guava30.com.google.common.util.concurrent.UncheckedExecutionException: > org.apache.flink.api.common.InvalidProgramException: Table program cannot be > compiled. This is a bug. Please file an issue. > at > org.apac
[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #10: [FLINK-25680] Introduce Table Store Flink Sink
JingsongLi commented on a change in pull request #10: URL: https://github.com/apache/flink-table-store/pull/10#discussion_r787519403 ## File path: flink-table-store-connector/src/main/java/org/apache/flink/table/store/connector/sink/StoreSinkWriter.java ## @@ -0,0 +1,163 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.store.connector.sink; + +import org.apache.flink.annotation.VisibleForTesting; +import org.apache.flink.api.connector.sink.SinkWriter; +import org.apache.flink.table.data.GenericRowData; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.data.binary.BinaryRowData; +import org.apache.flink.table.store.file.ValueKind; +import org.apache.flink.table.store.file.operation.FileStoreWrite; +import org.apache.flink.table.store.file.utils.RecordWriter; +import org.apache.flink.types.RowKind; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; + +/** A {@link SinkWriter} for dynamic store. */ +public class StoreSinkWriter implements SinkWriter { + +private final FileStoreWrite fileStoreWrite; + +private final SinkRecordConverter recordConverter; + +private final boolean overwrite; + +private final ExecutorService compactExecutor; + +private final Map> writers; + +public StoreSinkWriter( +FileStoreWrite fileStoreWrite, SinkRecordConverter recordConverter, boolean overwrite) { +this.fileStoreWrite = fileStoreWrite; +this.recordConverter = recordConverter; +this.overwrite = overwrite; +this.compactExecutor = Executors.newSingleThreadScheduledExecutor(); +this.writers = new HashMap<>(); +} + +private RecordWriter getWriter(BinaryRowData partition, int bucket) { +Map buckets = writers.get(partition); +if (buckets == null) { +buckets = new HashMap<>(); +writers.put(partition.copy(), buckets); +} +return buckets.computeIfAbsent( +bucket, +k -> +overwrite +? 
fileStoreWrite.createEmptyWriter( +partition, bucket, compactExecutor) +: fileStoreWrite.createWriter(partition, bucket, compactExecutor)); +} + +@Override +public void write(RowData rowData, Context context) throws IOException { +RowKind rowKind = rowData.getRowKind(); +SinkRecord record = recordConverter.convert(rowData); +RecordWriter writer = getWriter(record.partition(), record.bucket()); +try { +writeToFileStore(writer, record); +} catch (Exception e) { +throw new IOException(e); +} +rowData.setRowKind(rowKind); +} + +private void writeToFileStore(RecordWriter writer, SinkRecord record) throws Exception { +switch (record.rowKind()) { +case INSERT: +case UPDATE_AFTER: +if (record.key().getArity() == 0) { +writer.write(ValueKind.ADD, record.row(), GenericRowData.of(1)); +} else { +writer.write(ValueKind.ADD, record.key(), record.row()); +} +break; +case UPDATE_BEFORE: +case DELETE: +if (record.key().getArity() == 0) { +writer.write(ValueKind.ADD, record.row(), GenericRowData.of(-1)); +} else { +writer.write(ValueKind.DELETE, record.key(), record.row()); +} +break; +} +} + +@Override +public List prepareCommit(boolean flush) throws IOException { +try { +return prepareCommit(); +} catch (Exception e) { +throw new IOException(e); +} +} + +private List prepareCommit() throws Exception { +List committables = new ArrayList<>(); +for (BinaryRowData partition : writers.keySet()) { +Map buckets
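Since the review thread above quotes getWriter twice, a compact, hedged restatement of its lazy per-(partition, bucket) writer creation may help. The generic types are simplified stand-ins; note that the real code copies the BinaryRowData partition key before caching it, because that row object is reused between calls.
{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

final class PerBucketWriterCache<P, W> {

    private final Map<P, Map<Integer, W>> writers = new HashMap<>();
    private final BiFunction<P, Integer, W> factory;

    PerBucketWriterCache(BiFunction<P, Integer, W> factory) {
        this.factory = factory;
    }

    // One writer per (partition, bucket), created on first use; in overwrite
    // mode the real getWriter swaps in createEmptyWriter instead.
    W get(P partition, int bucket) {
        return writers
                .computeIfAbsent(partition, p -> new HashMap<>())
                .computeIfAbsent(bucket, b -> factory.apply(partition, bucket));
    }
}
{code}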
[GitHub] [flink] flinkbot edited a comment on pull request #18349: [FLINK-25609][table] Anonymous/inline tables don't require ObjectIdentifier anymore
flinkbot edited a comment on pull request #18349: URL: https://github.com/apache/flink/pull/18349#issuecomment-1011991081 ## CI report: * e4399cdc5964a3f49354cb114f5c043f5b2702dc Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29654) * 6a2b67f6358c91a443a8b2d99643e5298d08ba14 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18349: [FLINK-25609][table] Anonymous/inline tables don't require ObjectIdentifier anymore
flinkbot edited a comment on pull request #18349: URL: https://github.com/apache/flink/pull/18349#issuecomment-1011991081 ## CI report: * e4399cdc5964a3f49354cb114f5c043f5b2702dc Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29654) * 6a2b67f6358c91a443a8b2d99643e5298d08ba14 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29694) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-25698) Elasticsearch7DynamicSinkITCase.testWritingDocuments fails on AZP
Till Rohrmann created FLINK-25698: - Summary: Elasticsearch7DynamicSinkITCase.testWritingDocuments fails on AZP Key: FLINK-25698 URL: https://issues.apache.org/jira/browse/FLINK-25698 Project: Flink Issue Type: Bug Components: Connectors / ElasticSearch Affects Versions: 1.14.3 Reporter: Till Rohrmann The test {{Elasticsearch7DynamicSinkITCase.testWritingDocuments}} fails on AZP with {code} 2022-01-19T01:36:13.5231872Z Jan 19 01:36:13 [ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 60.838 s <<< FAILURE! - in org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase 2022-01-19T01:36:13.5233438Z Jan 19 01:36:13 [ERROR] testWritingDocuments Time elapsed: 32.146 s <<< ERROR! 2022-01-19T01:36:13.5234330Z Jan 19 01:36:13 org.apache.flink.runtime.client.JobExecutionException: Job execution failed. 2022-01-19T01:36:13.5235274Z Jan 19 01:36:13at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) 2022-01-19T01:36:13.5238310Z Jan 19 01:36:13at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137) 2022-01-19T01:36:13.5239309Z Jan 19 01:36:13at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) 2022-01-19T01:36:13.5239953Z Jan 19 01:36:13at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) 2022-01-19T01:36:13.5240822Z Jan 19 01:36:13at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) 2022-01-19T01:36:13.5241441Z Jan 19 01:36:13at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 2022-01-19T01:36:13.5242318Z Jan 19 01:36:13at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:258) 2022-01-19T01:36:13.5243144Z Jan 19 01:36:13at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) 2022-01-19T01:36:13.5244370Z Jan 19 01:36:13at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) 2022-01-19T01:36:13.5245319Z Jan 19 01:36:13at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) 2022-01-19T01:36:13.5246074Z Jan 19 01:36:13at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 2022-01-19T01:36:13.5246970Z Jan 19 01:36:13at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389) 2022-01-19T01:36:13.5247832Z Jan 19 01:36:13at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93) 2022-01-19T01:36:13.5248788Z Jan 19 01:36:13at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) 2022-01-19T01:36:13.5249775Z Jan 19 01:36:13at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92) 2022-01-19T01:36:13.5250826Z Jan 19 01:36:13at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) 2022-01-19T01:36:13.5251625Z Jan 19 01:36:13at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) 2022-01-19T01:36:13.5252531Z Jan 19 01:36:13at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) 2022-01-19T01:36:13.5253441Z Jan 19 01:36:13at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 2022-01-19T01:36:13.5254118Z Jan 19 01:36:13at 
org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47) 2022-01-19T01:36:13.5254753Z Jan 19 01:36:13at akka.dispatch.OnComplete.internal(Future.scala:300) 2022-01-19T01:36:13.5255381Z Jan 19 01:36:13at akka.dispatch.OnComplete.internal(Future.scala:297) 2022-01-19T01:36:13.5256202Z Jan 19 01:36:13at akka.dispatch.japi$CallbackBridge.apply(Future.scala:224) 2022-01-19T01:36:13.5256842Z Jan 19 01:36:13at akka.dispatch.japi$CallbackBridge.apply(Future.scala:221) 2022-01-19T01:36:13.5257400Z Jan 19 01:36:13at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) 2022-01-19T01:36:13.5258296Z Jan 19 01:36:13at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65) 2022-01-19T01:36:13.5259150Z Jan 19 01:36:13at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68) 2022-01-19T01:36:13.5259935Z Jan 19 01:36:13at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284) 2022-01-19T01:36:13.5260764Z Jan 19 01:36:13at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284) 2022-01-19T01:36:13.5261518Z
[jira] [Commented] (FLINK-25695) TemporalJoin cause state leak in some cases
[ https://issues.apache.org/jira/browse/FLINK-25695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478471#comment-17478471 ] Wenlong Lyu commented on FLINK-25695: - According to the description in FLINK-21833, I think the root cause may be that the registered event-time timer is not cleaned up when retention happens. The right way to fix it may be to remove the event timer when cleaning up all of the state. > TemporalJoin cause state leak in some cases > --- > > Key: FLINK-25695 > URL: https://issues.apache.org/jira/browse/FLINK-25695 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.3 >Reporter: Lyn Zhang >Priority: Major > Labels: pull-request-available > > Last year, I reported a similar bug of TemporalJoin causing a state leak. > Detail: FLINK-21833 > Recently, I found that the fix can reduce the leak size but cannot > resolve it completely. > The code at line 213 causes it, and the right fix is to invoke the cleanUp() method. > In FLINK-21833, we discussed that when the code runs at line 213, the > Left State, Right State and registeredTimerState are empty; actually the Left > State and Right State values (MapState) are empty, but the keys are still in > state, so invoking state.clear() is necessary. -- This message was sent by Atlassian Jira (v8.20.1#820001)
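A hedged sketch of the cleanup idea in that comment, written against the plain KeyedProcessFunction API rather than the actual temporal join operator (the state names, types and retention-timer wiring are all hypothetical): when retention fires, delete the registered event-time timer and clear every state primitive, so the now-empty key is removed from the backend instead of lingering.
{code:java}
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class TemporalCleanupSketch extends KeyedProcessFunction<String, String, String> {

    private transient MapState<Long, String> leftState;
    private transient MapState<Long, String> rightState;
    private transient ValueState<Long> registeredEventTimer;

    @Override
    public void open(Configuration parameters) {
        leftState = getRuntimeContext()
                .getMapState(new MapStateDescriptor<>("left", Long.class, String.class));
        rightState = getRuntimeContext()
                .getMapState(new MapStateDescriptor<>("right", Long.class, String.class));
        registeredEventTimer = getRuntimeContext()
                .getState(new ValueStateDescriptor<>("event-timer", Long.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) {
        // Join bookkeeping and retention-timer registration elided.
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out)
            throws Exception {
        // Retention fired: first drop the pending event-time timer, then clear
        // every state primitive so the key itself disappears, not just its values.
        Long pending = registeredEventTimer.value();
        if (pending != null) {
            ctx.timerService().deleteEventTimeTimer(pending);
        }
        leftState.clear();
        rightState.clear();
        registeredEventTimer.clear();
    }
}
{code}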
[jira] [Commented] (FLINK-21834) org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset fail
[ https://issues.apache.org/jira/browse/FLINK-21834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478472#comment-17478472 ] Till Rohrmann commented on FLINK-21834: --- Another instance: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29668&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d&l=10260 > org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset > fail > - > > Key: FLINK-21834 > URL: https://issues.apache.org/jira/browse/FLINK-21834 > Project: Flink > Issue Type: Bug > Components: FileSystems >Affects Versions: 1.12.2, 1.13.2, 1.15.0 >Reporter: Guowei Ma >Priority: Critical > Labels: test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14847&view=logs&j=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89&t=5d6e4255-0ea8-5e2a-f52c-c881b7872361&l=10893 > Maybe we need print what the exception is when `recover` is called. > {code:java} > java.lang.AssertionError > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.fail(Assert.java:95) > at > org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset(AbstractRecoverableWriterTest.java:381) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
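To make the diagnostic the description asks for concrete, a hedged sketch (hypothetical test shape and stub interface, not the actual AbstractRecoverableWriterTest) of surfacing the exception seen around the recover() call instead of a bare fail():
{code:java}
import java.io.IOException;

final class RecoverDiagnosticsSketch {

    /** Stand-in for the recoverable writer under test. */
    interface RecoverableWriterStub {
        void recover(long offset) throws IOException;
    }

    static void recoverOrExplain(RecoverableWriterStub writer, long offset) {
        try {
            writer.recover(offset);
        } catch (IOException e) {
            // A bare fail() loses this; wrapping preserves the actual cause in
            // the AssertionError that surfaces in the CI logs.
            throw new AssertionError("recover() failed for offset " + offset, e);
        }
    }
}
{code}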
[jira] [Updated] (FLINK-21834) org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset fail
[ https://issues.apache.org/jira/browse/FLINK-21834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann updated FLINK-21834: -- Labels: test-stability (was: auto-deprioritized-major test-stability) > org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset > fail > - > > Key: FLINK-21834 > URL: https://issues.apache.org/jira/browse/FLINK-21834 > Project: Flink > Issue Type: Bug > Components: FileSystems >Affects Versions: 1.12.2, 1.13.2, 1.15.0 >Reporter: Guowei Ma >Priority: Critical > Labels: test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14847&view=logs&j=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89&t=5d6e4255-0ea8-5e2a-f52c-c881b7872361&l=10893 > Maybe we need print what the exception is when `recover` is called. > {code:java} > java.lang.AssertionError > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.fail(Assert.java:95) > at > org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset(AbstractRecoverableWriterTest.java:381) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (FLINK-21834) org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset fail
[ https://issues.apache.org/jira/browse/FLINK-21834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann updated FLINK-21834: -- Affects Version/s: 1.15.0 > org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset > fail > - > > Key: FLINK-21834 > URL: https://issues.apache.org/jira/browse/FLINK-21834 > Project: Flink > Issue Type: Bug > Components: FileSystems >Affects Versions: 1.12.2, 1.13.2, 1.15.0 >Reporter: Guowei Ma >Priority: Critical > Labels: auto-deprioritized-major, test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14847&view=logs&j=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89&t=5d6e4255-0ea8-5e2a-f52c-c881b7872361&l=10893 > Maybe we need print what the exception is when `recover` is called. > {code:java} > java.lang.AssertionError > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.fail(Assert.java:95) > at > org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset(AbstractRecoverableWriterTest.java:381) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
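To make the diagnostic suggestion above concrete, here is a minimal sketch of a test helper that surfaces the underlying exception instead of a bare AssertionError. The helper name and shape are illustrative only and are not Flink's actual CommonTestUtils API:

{code:java}
import java.util.concurrent.Callable;

/** Sketch: run an action that is expected to throw, and print the exception for CI logs. */
public final class ExpectFailure {

    public static Exception expectException(Callable<?> action) {
        try {
            action.call();
        } catch (Exception e) {
            // Printing here is the point of the suggestion: the CI log then
            // shows *why* recover() failed rather than only an AssertionError.
            e.printStackTrace();
            return e;
        }
        throw new AssertionError("Expected the action to throw, but it completed normally");
    }
}
{code}

A test could then call expectException(() -> writer.recover(wrongOffsetRecoverable)), where writer and wrongOffsetRecoverable are placeholders for the objects the real test builds.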
[jira] [Updated] (FLINK-21834) org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset fail
[ https://issues.apache.org/jira/browse/FLINK-21834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann updated FLINK-21834: -- Priority: Critical (was: Minor) > org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset > fail > - > > Key: FLINK-21834 > URL: https://issues.apache.org/jira/browse/FLINK-21834 > Project: Flink > Issue Type: Bug > Components: FileSystems >Affects Versions: 1.12.2, 1.13.2 >Reporter: Guowei Ma >Priority: Critical > Labels: auto-deprioritized-major, test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14847&view=logs&j=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89&t=5d6e4255-0ea8-5e2a-f52c-c881b7872361&l=10893 > Maybe we need to print what the exception is when `recover` is called. > {code:java} > java.lang.AssertionError > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.fail(Assert.java:95) > at > org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset(AbstractRecoverableWriterTest.java:381) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] flinkbot edited a comment on pull request #17601: [FLINK-24697][flink-connectors-kafka] add auto.offset.reset configuration for group-offsets startup mode
flinkbot edited a comment on pull request #17601: URL: https://github.com/apache/flink/pull/17601#issuecomment-954546978 ## CI report: * 124c7893234b31fc1a3ce5b3b29aff32e3b7847d Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29683) * 7667bd94a9d0af29b443a1b07f6e7077f5b9cce6 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29691) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18102: [FLINK-25033][runtime] Let some scheduler components updatable
flinkbot edited a comment on pull request #18102: URL: https://github.com/apache/flink/pull/18102#issuecomment-993332110 ## CI report: * 2182c3801f333d4b4827b177bb05ec04c648ea87 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29682) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-24163) PartiallyFinishedSourcesITCase fails due to timeout
[ https://issues.apache.org/jira/browse/FLINK-24163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478475#comment-17478475 ] Till Rohrmann commented on FLINK-24163: --- Another instance: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29668&view=logs&j=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3&t=0c010d0c-3dec-5bf1-d408-7b18988b1b2b&l=10402 > PartiallyFinishedSourcesITCase fails due to timeout > --- > > Key: FLINK-24163 > URL: https://issues.apache.org/jira/browse/FLINK-24163 > Project: Flink > Issue Type: Bug > Components: API / DataStream >Affects Versions: 1.14.0, 1.15.0 >Reporter: Xintong Song >Assignee: Yun Gao >Priority: Blocker > Labels: pull-request-available, test-stability > Fix For: 1.14.0, 1.15.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23529&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=10996 > {code} > Sep 04 04:35:28 [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, > Time elapsed: 155.236 s <<< FAILURE! - in > org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase > Sep 04 04:35:28 [ERROR] test[complex graph ALL_SUBTASKS, failover: false] > Time elapsed: 65.999 s <<< ERROR! > Sep 04 04:35:28 java.util.concurrent.TimeoutException: Condition was not met > in given timeout. > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:164) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:142) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:134) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitForSubtasksToFinish(CommonTestUtils.java:297) > Sep 04 04:35:28 at > org.apache.flink.runtime.operators.lifecycle.TestJobExecutor.waitForSubtasksToFinish(TestJobExecutor.java:219) > Sep 04 04:35:28 at > org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase.test(PartiallyFinishedSourcesITCase.java:82) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
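For context, the TimeoutException above comes from a polling utility that waits for a condition and gives up at a deadline. A minimal sketch of such a poll-until-true helper follows; it assumes nothing about Flink's actual CommonTestUtils internals beyond the message it throws:

{code:java}
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

/** Sketch of a condition-waiting helper in the spirit of CommonTestUtils.waitUntilCondition. */
public final class WaitUtil {

    public static void waitUntilCondition(
            Supplier<Boolean> condition, long timeoutMillis, long pauseMillis) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.get()) {
                return; // condition met before the deadline
            }
            Thread.sleep(pauseMillis); // back off between polls
        }
        throw new TimeoutException("Condition was not met in given timeout.");
    }
}
{code}

A test that waits for subtasks to finish would pass a supplier that queries the job status; if the job never reaches the expected state, the helper times out exactly as in the stack trace above.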
[GitHub] [flink] wenlong88 commented on pull request #18017: [FLINK-25171] Validation of duplicate fields in ddl sql
wenlong88 commented on pull request #18017: URL: https://github.com/apache/flink/pull/18017#issuecomment-1016246672 @jelly-1203 thanks for following up, I will ping @godfreyhe offline to follow up on the PR. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-24163) PartiallyFinishedSourcesITCase fails due to timeout
[ https://issues.apache.org/jira/browse/FLINK-24163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478479#comment-17478479 ] Till Rohrmann commented on FLINK-24163: --- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29668&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=16828 > PartiallyFinishedSourcesITCase fails due to timeout > --- > > Key: FLINK-24163 > URL: https://issues.apache.org/jira/browse/FLINK-24163 > Project: Flink > Issue Type: Bug > Components: API / DataStream >Affects Versions: 1.14.0, 1.15.0 >Reporter: Xintong Song >Assignee: Yun Gao >Priority: Blocker > Labels: pull-request-available, test-stability > Fix For: 1.14.0, 1.15.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23529&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=10996 > {code} > Sep 04 04:35:28 [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, > Time elapsed: 155.236 s <<< FAILURE! - in > org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase > Sep 04 04:35:28 [ERROR] test[complex graph ALL_SUBTASKS, failover: false] > Time elapsed: 65.999 s <<< ERROR! > Sep 04 04:35:28 java.util.concurrent.TimeoutException: Condition was not met > in given timeout. > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:164) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:142) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:134) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitForSubtasksToFinish(CommonTestUtils.java:297) > Sep 04 04:35:28 at > org.apache.flink.runtime.operators.lifecycle.TestJobExecutor.waitForSubtasksToFinish(TestJobExecutor.java:219) > Sep 04 04:35:28 at > org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase.test(PartiallyFinishedSourcesITCase.java:82) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink-table-store] tsreaper commented on a change in pull request #10: [FLINK-25680] Introduce Table Store Flink Sink
tsreaper commented on a change in pull request #10: URL: https://github.com/apache/flink-table-store/pull/10#discussion_r787537977

## File path: flink-table-store-connector/src/main/java/org/apache/flink/table/store/connector/sink/StoreSinkWriter.java
## @@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.connector.sink;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.connector.sink.SinkWriter;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.binary.BinaryRowData;
+import org.apache.flink.table.store.file.ValueKind;
+import org.apache.flink.table.store.file.operation.FileStoreWrite;
+import org.apache.flink.table.store.file.utils.RecordWriter;
+import org.apache.flink.types.RowKind;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+/** A {@link SinkWriter} for dynamic store. */
+public class StoreSinkWriter implements SinkWriter {
+
+    private final FileStoreWrite fileStoreWrite;
+
+    private final SinkRecordConverter recordConverter;
+
+    private final boolean overwrite;
+
+    private final ExecutorService compactExecutor;
+
+    private final Map<BinaryRowData, Map<Integer, RecordWriter>> writers;
+
+    public StoreSinkWriter(
+            FileStoreWrite fileStoreWrite, SinkRecordConverter recordConverter, boolean overwrite) {
+        this.fileStoreWrite = fileStoreWrite;
+        this.recordConverter = recordConverter;
+        this.overwrite = overwrite;
+        this.compactExecutor = Executors.newSingleThreadScheduledExecutor();
+        this.writers = new HashMap<>();
+    }
+
+    private RecordWriter getWriter(BinaryRowData partition, int bucket) {
+        Map<Integer, RecordWriter> buckets = writers.get(partition);
+        if (buckets == null) {
+            buckets = new HashMap<>();
+            writers.put(partition.copy(), buckets);
+        }
+        return buckets.computeIfAbsent(
+                bucket,
+                k ->
+                        overwrite
+                                ? fileStoreWrite.createEmptyWriter(partition, bucket, compactExecutor)
+                                : fileStoreWrite.createWriter(partition, bucket, compactExecutor));
+    }
+
+    @Override
+    public void write(RowData rowData, Context context) throws IOException {
+        RowKind rowKind = rowData.getRowKind();
+        SinkRecord record = recordConverter.convert(rowData);
+        RecordWriter writer = getWriter(record.partition(), record.bucket());
+        try {
+            writeToFileStore(writer, record);
+        } catch (Exception e) {
+            throw new IOException(e);
+        }
+        rowData.setRowKind(rowKind);
+    }
+
+    private void writeToFileStore(RecordWriter writer, SinkRecord record) throws Exception {
+        switch (record.rowKind()) {
+            case INSERT:
+            case UPDATE_AFTER:
+                if (record.key().getArity() == 0) {
+                    writer.write(ValueKind.ADD, record.row(), GenericRowData.of(1));
+                } else {
+                    writer.write(ValueKind.ADD, record.key(), record.row());
+                }
+                break;
+            case UPDATE_BEFORE:
+            case DELETE:
+                if (record.key().getArity() == 0) {
+                    writer.write(ValueKind.ADD, record.row(), GenericRowData.of(-1));
+                } else {
+                    writer.write(ValueKind.DELETE, record.key(), record.row());
+                }
+                break;
+        }
+    }
+
+    @Override
+    public List<Committable> prepareCommit(boolean flush) throws IOException {
+        try {
+            return prepareCommit();
+        } catch (Exception e) {
+            throw new IOException(e);
+        }
+    }
+
+    private List<Committable> prepareCommit() throws Exception {
+        List<Committable> committables = new ArrayList<>();
+        for (BinaryRowData partition : writers.keySet()) {
+            Map<Integer, RecordWriter> buckets =
[jira] [Commented] (FLINK-25698) Elasticsearch7DynamicSinkITCase.testWritingDocuments fails on AZP
[ https://issues.apache.org/jira/browse/FLINK-25698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478481#comment-17478481 ] Martijn Visser commented on FLINK-25698: CC [~alexanderpreuss] [~fpaul] > Elasticsearch7DynamicSinkITCase.testWritingDocuments fails on AZP > - > > Key: FLINK-25698 > URL: https://issues.apache.org/jira/browse/FLINK-25698 > Project: Flink > Issue Type: Bug > Components: Connectors / ElasticSearch >Affects Versions: 1.14.3 >Reporter: Till Rohrmann >Priority: Critical > Labels: test-stability > > The test {{Elasticsearch7DynamicSinkITCase.testWritingDocuments}} fails on > AZP with > {code} > 2022-01-19T01:36:13.5231872Z Jan 19 01:36:13 [ERROR] Tests run: 4, Failures: > 0, Errors: 1, Skipped: 0, Time elapsed: 60.838 s <<< FAILURE! - in > org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase > 2022-01-19T01:36:13.5233438Z Jan 19 01:36:13 [ERROR] testWritingDocuments > Time elapsed: 32.146 s <<< ERROR! > 2022-01-19T01:36:13.5234330Z Jan 19 01:36:13 > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. > 2022-01-19T01:36:13.5235274Z Jan 19 01:36:13 at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) > 2022-01-19T01:36:13.5238310Z Jan 19 01:36:13 at > org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137) > 2022-01-19T01:36:13.5239309Z Jan 19 01:36:13 at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) > 2022-01-19T01:36:13.5239953Z Jan 19 01:36:13 at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > 2022-01-19T01:36:13.5240822Z Jan 19 01:36:13 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2022-01-19T01:36:13.5241441Z Jan 19 01:36:13 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2022-01-19T01:36:13.5242318Z Jan 19 01:36:13 at > org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:258) > 2022-01-19T01:36:13.5243144Z Jan 19 01:36:13 at > java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > 2022-01-19T01:36:13.5244370Z Jan 19 01:36:13 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > 2022-01-19T01:36:13.5245319Z Jan 19 01:36:13 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2022-01-19T01:36:13.5246074Z Jan 19 01:36:13 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2022-01-19T01:36:13.5246970Z Jan 19 01:36:13 at > org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389) > 2022-01-19T01:36:13.5247832Z Jan 19 01:36:13 at > org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93) > 2022-01-19T01:36:13.5248788Z Jan 19 01:36:13 at > org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) > 2022-01-19T01:36:13.5249775Z Jan 19 01:36:13 at > org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92) > 2022-01-19T01:36:13.5250826Z Jan 19 01:36:13 at > java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > 2022-01-19T01:36:13.5251625Z Jan 19 01:36:13 at > 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > 2022-01-19T01:36:13.5252531Z Jan 19 01:36:13 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2022-01-19T01:36:13.5253441Z Jan 19 01:36:13 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2022-01-19T01:36:13.5254118Z Jan 19 01:36:13 at > org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47) > 2022-01-19T01:36:13.5254753Z Jan 19 01:36:13 at > akka.dispatch.OnComplete.internal(Future.scala:300) > 2022-01-19T01:36:13.5255381Z Jan 19 01:36:13 at > akka.dispatch.OnComplete.internal(Future.scala:297) > 2022-01-19T01:36:13.5256202Z Jan 19 01:36:13 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:224) > 2022-01-19T01:36:13.5256842Z Jan 19 01:36:13 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:221) > 2022-01-19T01:36:13.5257400Z Jan 19 01:36:13 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) > 2022-01-19T01:36:13.5258296Z Jan 19 01:36:13 at > org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65) > 2022-01-19T01:36:13.525
[GitHub] [flink] flinkbot edited a comment on pull request #18368: [FLINK-18356][Azure] Enforce memory limits for docker containers
flinkbot edited a comment on pull request #18368: URL: https://github.com/apache/flink/pull/18368#issuecomment-1013194055 ## CI report: * 13a059ef4d8b011d5cc1cac10861e0bafa98c6d2 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29645) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-24163) PartiallyFinishedSourcesITCase fails due to timeout
[ https://issues.apache.org/jira/browse/FLINK-24163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478483#comment-17478483 ] Roman Khachatryan commented on FLINK-24163: --- Hi Yun, sure, I'll take a look. As for the mentioned commit, it reverted the changes to order behavior which is slower. So the increase there is expected. > PartiallyFinishedSourcesITCase fails due to timeout > --- > > Key: FLINK-24163 > URL: https://issues.apache.org/jira/browse/FLINK-24163 > Project: Flink > Issue Type: Bug > Components: API / DataStream >Affects Versions: 1.14.0, 1.15.0 >Reporter: Xintong Song >Assignee: Yun Gao >Priority: Blocker > Labels: pull-request-available, test-stability > Fix For: 1.14.0, 1.15.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23529&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=10996 > {code} > Sep 04 04:35:28 [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, > Time elapsed: 155.236 s <<< FAILURE! - in > org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase > Sep 04 04:35:28 [ERROR] test[complex graph ALL_SUBTASKS, failover: false] > Time elapsed: 65.999 s <<< ERROR! > Sep 04 04:35:28 java.util.concurrent.TimeoutException: Condition was not met > in given timeout. > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:164) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:142) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:134) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitForSubtasksToFinish(CommonTestUtils.java:297) > Sep 04 04:35:28 at > org.apache.flink.runtime.operators.lifecycle.TestJobExecutor.waitForSubtasksToFinish(TestJobExecutor.java:219) > Sep 04 04:35:28 at > org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase.test(PartiallyFinishedSourcesITCase.java:82) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (FLINK-25695) TemporalJoin cause state leak in some cases
[ https://issues.apache.org/jira/browse/FLINK-25695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lyn Zhang updated FLINK-25695: -- Attachment: test.sql > TemporalJoin cause state leak in some cases > --- > > Key: FLINK-25695 > URL: https://issues.apache.org/jira/browse/FLINK-25695 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.3 >Reporter: Lyn Zhang >Priority: Major > Labels: pull-request-available > Attachments: test.sql > > > Last year, I reported a similar bug of TemporalJoin causing a state leak. > Detail: FLINK-21833 > Recently, I found that the fix can reduce the leak size but cannot > resolve it completely. > The code at line 213 causes it, and the right fix is to invoke the cleanUp() method. > In FLINK-21833, we discussed that when the code runs at line 213, the > Left State, Right State, and registeredTimerState are empty; actually, the Left > State and Right State values (MapState) are empty but the keys are still in > state, so invoking state.clear() is necessary. -- This message was sent by Atlassian Jira (v8.20.1#820001)
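To make the proposed fix concrete, here is a sketch of the cleanup shape the reporter describes: clearing the keyed state handles so the current key is removed from the backend entirely. The field names and types are assumed for illustration and are not the actual temporal join operator code:

{code:java}
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.table.data.RowData;

/** Sketch of the suggested cleanup (hypothetical names, not the actual operator). */
final class TemporalJoinCleanupSketch {

    MapState<Long, RowData> leftState;      // assumed shape
    MapState<Long, RowData> rightState;     // assumed shape
    ValueState<Long> registeredTimerState;  // assumed shape

    void cleanUp() throws Exception {
        // clear() removes the current key from the state backend entirely,
        // not just its (already empty) map entries -- which is what avoids the key leak.
        leftState.clear();
        rightState.clear();
        registeredTimerState.clear();
    }
}
{code}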
[jira] [Comment Edited] (FLINK-24163) PartiallyFinishedSourcesITCase fails due to timeout
[ https://issues.apache.org/jira/browse/FLINK-24163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478483#comment-17478483 ] Roman Khachatryan edited comment on FLINK-24163 at 1/19/22, 9:38 AM: - Hi Yun, sure, I'll take a look. As for the mentioned commit, it reverted the changes back to the older behavior which is slower. So the increase there is expected. was (Author: roman_khachatryan): Hi Yun, sure, I'll take a look. As for the mentioned commit, it reverted the changes to order behavior which is slower. So the increase there is expected. > PartiallyFinishedSourcesITCase fails due to timeout > --- > > Key: FLINK-24163 > URL: https://issues.apache.org/jira/browse/FLINK-24163 > Project: Flink > Issue Type: Bug > Components: API / DataStream >Affects Versions: 1.14.0, 1.15.0 >Reporter: Xintong Song >Assignee: Yun Gao >Priority: Blocker > Labels: pull-request-available, test-stability > Fix For: 1.14.0, 1.15.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23529&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=10996 > {code} > Sep 04 04:35:28 [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, > Time elapsed: 155.236 s <<< FAILURE! - in > org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase > Sep 04 04:35:28 [ERROR] test[complex graph ALL_SUBTASKS, failover: false] > Time elapsed: 65.999 s <<< ERROR! > Sep 04 04:35:28 java.util.concurrent.TimeoutException: Condition was not met > in given timeout. > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:164) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:142) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:134) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitForSubtasksToFinish(CommonTestUtils.java:297) > Sep 04 04:35:28 at > org.apache.flink.runtime.operators.lifecycle.TestJobExecutor.waitForSubtasksToFinish(TestJobExecutor.java:219) > Sep 04 04:35:28 at > org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase.test(PartiallyFinishedSourcesITCase.java:82) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-25695) TemporalJoin cause state leak in some cases
[ https://issues.apache.org/jira/browse/FLINK-25695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478486#comment-17478486 ] Lyn Zhang commented on FLINK-25695: --- The SQL is as in the attachment below, with table.exec.state.ttl set to 1min: [^test.sql] > TemporalJoin cause state leak in some cases > --- > > Key: FLINK-25695 > URL: https://issues.apache.org/jira/browse/FLINK-25695 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.3 >Reporter: Lyn Zhang >Priority: Major > Labels: pull-request-available > Attachments: test.sql > > > Last year, I reported a similar bug of TemporalJoin causing a state leak. > Detail: FLINK-21833 > Recently, I found that the fix can reduce the leak size but cannot > resolve it completely. > The code at line 213 causes it, and the right fix is to invoke the cleanUp() method. > In FLINK-21833, we discussed that when the code runs at line 213, the > Left State, Right State, and registeredTimerState are empty; actually, the Left > State and Right State values (MapState) are empty but the keys are still in > state, so invoking state.clear() is necessary. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] flinkbot edited a comment on pull request #18394: [FLINK-25520][Table SQL/API] Implement "ALTER TABLE ... COMPACT" SQL
flinkbot edited a comment on pull request #18394: URL: https://github.com/apache/flink/pull/18394#issuecomment-1015323011 ## CI report: * cc5eaf1dd5be0414a77c59f619e407c01d083e0c Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29685) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-25699) Use HashMap for MAP value constructors
Timo Walther created FLINK-25699: Summary: Use HashMap for MAP value constructors Key: FLINK-25699 URL: https://issues.apache.org/jira/browse/FLINK-25699 Project: Flink Issue Type: Sub-task Components: Table SQL / Runtime Reporter: Timo Walther Currently, the usage of maps is inconsistent. It is not ensured that duplicate keys get eliminated. For CAST and output conversion this is solved. However, we should have a similar implementation in the MAP value constructor as in CAST. -- This message was sent by Atlassian Jira (v8.20.1#820001)
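As a plain-Java illustration of the intended semantics — routing the entries through a hash map so a duplicate key collapses to a single entry, with the last value winning — consider the sketch below. It demonstrates the behavior only and is not the planner's actual code generation:

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Sketch: duplicate keys collapse when entries go through a HashMap. */
public final class MapConstructorSketch {

    public static Map<String, Integer> buildMap(Object... keyValuePairs) {
        Map<String, Integer> map = new HashMap<>();
        for (int i = 0; i < keyValuePairs.length; i += 2) {
            // put() overwrites an existing key, so MAP['a', 1, 'a', 2] yields {a=2}
            map.put((String) keyValuePairs[i], (Integer) keyValuePairs[i + 1]);
        }
        return map;
    }

    public static void main(String[] args) {
        System.out.println(buildMap("a", 1, "a", 2)); // prints {a=2}, a single entry
    }
}
{code}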
[GitHub] [flink] JingsongLi commented on pull request #18359: [FLINK-25484][connectors/filesystem] Support inactivityInterval config in table api
JingsongLi commented on pull request #18359: URL: https://github.com/apache/flink/pull/18359#issuecomment-1016258587 Thanks @lichang-bd , can you add test to verify? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-25700) flinksql Cascading Window TVF Aggregation exception
simenliuxing created FLINK-25700: Summary: flinksql Cascading Window TVF Aggregation exception Key: FLINK-25700 URL: https://issues.apache.org/jira/browse/FLINK-25700 Project: Flink Issue Type: Bug Components: Table SQL / Planner Affects Versions: 1.14.0 Reporter: simenliuxing flink version 1.14.0 sql {code:java} CREATE TABLE order_test ( order_price int, order_item varchar, ts as localtimestamp, WATERMARK FOR ts as ts - INTERVAL '5' SECOND ) WITH ( 'connector' = 'datagen' ); create view window_view2 as select window_start, window_end, window_time as rowtime, sum(order_price) partial_price from TABLE( TUMBLE( TABLE order_test, DESCRIPTOR(ts), INTERVAL '2' SECOND) ) group by order_price, window_start, window_end, window_time; select window_start, window_end, sum(partial_price) total_price from TABLE( TUMBLE( TABLE window_view2, DESCRIPTOR(rowtime), INTERVAL '10' SECOND) ) group by window_start, window_end;{code} exception {code:java} Caused by: org.apache.calcite.sql.validate.SqlValidatorException: Column 'window_start' is ambiguous at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467) at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:560) ... 41 more{code} I think it's a bug -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (FLINK-25700) flinksql Cascading Window TVF Aggregation exception
[ https://issues.apache.org/jira/browse/FLINK-25700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] simenliuxing updated FLINK-25700: - Attachment: image-2022-01-19-17-45-57-124.png > flinksql Cascading Window TVF Aggregation exception > > > Key: FLINK-25700 > URL: https://issues.apache.org/jira/browse/FLINK-25700 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.14.0 >Reporter: simenliuxing >Priority: Major > Attachments: image-2022-01-19-17-45-57-124.png > > > flink version 1.14.0 > sql > > {code:java} > CREATE TABLE order_test > ( > order_price int, > order_item varchar, > ts as localtimestamp, > WATERMARK FOR ts as ts - INTERVAL '5' SECOND > ) WITH ( > 'connector' = 'datagen' > ); > create view window_view2 as > select window_start, window_end, window_time as rowtime, sum(order_price) > partial_price > from TABLE( > TUMBLE( > TABLE order_test, DESCRIPTOR(ts), INTERVAL '2' SECOND) > ) > group by order_price, window_start, window_end, window_time; > select window_start, window_end, sum(partial_price) total_price > from TABLE( > TUMBLE( > TABLE window_view2, DESCRIPTOR(rowtime), INTERVAL '10' SECOND) > ) > group by window_start, window_end;{code} > exception > > {code:java} > Caused by: org.apache.calcite.sql.validate.SqlValidatorException: Column > 'window_start' is ambiguous > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467) > at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:560) > ... 41 more{code} > > > I think it's a bug -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (FLINK-25700) flinksql Cascading Window TVF Aggregation exception
[ https://issues.apache.org/jira/browse/FLINK-25700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] simenliuxing updated FLINK-25700: - Attachment: (was: image-2022-01-19-17-45-57-124.png) > flinksql Cascading Window TVF Aggregation exception > > > Key: FLINK-25700 > URL: https://issues.apache.org/jira/browse/FLINK-25700 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.14.0 >Reporter: simenliuxing >Priority: Major > > flink version 1.14.0 > sql > > {code:java} > CREATE TABLE order_test > ( > order_price int, > order_item varchar, > ts as localtimestamp, > WATERMARK FOR ts as ts - INTERVAL '5' SECOND > ) WITH ( > 'connector' = 'datagen' > ); > create view window_view2 as > select window_start, window_end, window_time as rowtime, sum(order_price) > partial_price > from TABLE( > TUMBLE( > TABLE order_test, DESCRIPTOR(ts), INTERVAL '2' SECOND) > ) > group by order_price, window_start, window_end, window_time; > select window_start, window_end, sum(partial_price) total_price > from TABLE( > TUMBLE( > TABLE window_view2, DESCRIPTOR(rowtime), INTERVAL '10' SECOND) > ) > group by window_start, window_end;{code} > exception > > {code:java} > Caused by: org.apache.calcite.sql.validate.SqlValidatorException: Column > 'window_start' is ambiguous > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467) > at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:560) > ... 41 more{code} > > > I think it's a bug -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-25700) flinksql Cascading Window TVF Aggregation exception
[ https://issues.apache.org/jira/browse/FLINK-25700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478494#comment-17478494 ] simenliuxing commented on FLINK-25700: -- [~jark] hi,can you take a look? > flinksql Cascading Window TVF Aggregation exception > > > Key: FLINK-25700 > URL: https://issues.apache.org/jira/browse/FLINK-25700 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.14.0 >Reporter: simenliuxing >Priority: Major > > flink version 1.14.0 > sql > > {code:java} > CREATE TABLE order_test > ( > order_price int, > order_item varchar, > ts as localtimestamp, > WATERMARK FOR ts as ts - INTERVAL '5' SECOND > ) WITH ( > 'connector' = 'datagen' > ); > create view window_view2 as > select window_start, window_end, window_time as rowtime, sum(order_price) > partial_price > from TABLE( > TUMBLE( > TABLE order_test, DESCRIPTOR(ts), INTERVAL '2' SECOND) > ) > group by order_price, window_start, window_end, window_time; > select window_start, window_end, sum(partial_price) total_price > from TABLE( > TUMBLE( > TABLE window_view2, DESCRIPTOR(rowtime), INTERVAL '10' SECOND) > ) > group by window_start, window_end;{code} > exception > > {code:java} > Caused by: org.apache.calcite.sql.validate.SqlValidatorException: Column > 'window_start' is ambiguous > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467) > at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:560) > ... 41 more{code} > > > I think it's a bug -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] snuyanzin commented on a change in pull request #18287: [FLINK-17321][table] Add support casting of map to map and multiset to multiset
snuyanzin commented on a change in pull request #18287: URL: https://github.com/apache/flink/pull/18287#discussion_r787556416

## File path: flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/ExprCodeGenerator.scala
## @@ -722,9 +722,9 @@ class ExprCodeGenerator(ctx: CodeGeneratorContext, nullableInput: Boolean)
       case ARRAY_VALUE_CONSTRUCTOR =>
         generateArray(ctx, resultType, operands)

-      // maps and multisets
-      case MAP_VALUE_CONSTRUCTOR | MULTISET_VALUE =>
-        generateMapOrMultiset(ctx, resultType, operands)
+      // maps
+      case MAP_VALUE_CONSTRUCTOR =>

Review comment: Hi @twalthr, thanks for highlighting this. Yes, I will try to handle this. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-25485) JDBC connector implicitly append url options when use mysql
[ https://issues.apache.org/jira/browse/FLINK-25485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ada Wong updated FLINK-25485: - Description: When we directly use the buffer-flush options of the MySQL sink, the buffer-flush options (sink.buffer-flush.max-rows, sink.buffer-flush.interval) cannot increase throughput. We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL for 'sink.buffer-flush' to take effect. I want to add a method appendUrlSuffix in MySQLDialect to automatically set this JDBC option. Many users forget or don't know this option. Inspired by Alibaba DataX. [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] was: When we directly use the MySQL sink, the buffer-flush options (sink.buffer-flush.max-rows, sink.buffer-flush.interval) cannot increase throughput. We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL for 'sink.buffer-flush' to take effect. I want to add a method appendUrlSuffix in MySQLDialect to automatically set this JDBC option. Many users forget or don't know this option. Inspired by Alibaba DataX. [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] > JDBC connector implicitly append url options when use mysql > --- > > Key: FLINK-25485 > URL: https://issues.apache.org/jira/browse/FLINK-25485 > Project: Flink > Issue Type: Improvement > Components: Connectors / JDBC >Affects Versions: 1.14.2 >Reporter: Ada Wong >Priority: Major > > When we directly use the buffer-flush options of the MySQL sink, the buffer-flush > options (sink.buffer-flush.max-rows, sink.buffer-flush.interval) cannot > increase throughput. > We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL, for > 'sink.buffer-flush' to take effect. > I want to add a method appendUrlSuffix in MySQLDialect to automatically set this jdbc > option. > Many users forget or don't know this option. > Inspired by Alibaba DataX. > [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] -- This message was sent by Atlassian Jira (v8.20.1#820001)
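A minimal sketch of what the proposed helper could look like follows. The method name appendUrlSuffix comes from the ticket; everything else (the class name, the guard against an explicit user setting) is assumed for illustration:

{code:java}
/** Sketch of the proposed URL-suffix helper for MySQLDialect (hypothetical shape). */
public final class MySqlUrlSuffix {

    private static final String REWRITE_BATCHED = "rewriteBatchedStatements=true";

    public static String appendUrlSuffix(String jdbcUrl) {
        if (jdbcUrl.contains("rewriteBatchedStatements")) {
            return jdbcUrl; // respect whatever the user configured explicitly
        }
        // Append with '?' or '&' depending on whether the URL already has options.
        return jdbcUrl + (jdbcUrl.contains("?") ? "&" : "?") + REWRITE_BATCHED;
    }

    public static void main(String[] args) {
        // jdbc:mysql://ip:3306/dbname -> jdbc:mysql://ip:3306/dbname?rewriteBatchedStatements=true
        System.out.println(appendUrlSuffix("jdbc:mysql://ip:3306/dbname"));
    }
}
{code}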
[jira] [Commented] (FLINK-25485) JDBC connector implicitly append url options when use mysql
[ https://issues.apache.org/jira/browse/FLINK-25485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478499#comment-17478499 ] Ada Wong commented on FLINK-25485: -- cc [~jark] [~fpaul] [~roman] Many users have reported this problem in the Chinese DingTalk group. > JDBC connector implicitly append url options when use mysql > --- > > Key: FLINK-25485 > URL: https://issues.apache.org/jira/browse/FLINK-25485 > Project: Flink > Issue Type: Improvement > Components: Connectors / JDBC >Affects Versions: 1.14.2 >Reporter: Ada Wong >Priority: Major > > When we directly use the buffer-flush options of the MySQL sink, the buffer-flush > options (sink.buffer-flush.max-rows, sink.buffer-flush.interval) cannot > increase throughput. > We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL, for > 'sink.buffer-flush' to take effect. > I want to add a method appendUrlSuffix in MySQLDialect to automatically set this jdbc > option. > Many users forget or don't know this option. > Inspired by Alibaba DataX. > [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Comment Edited] (FLINK-25700) flinksql Cascading Window TVF Aggregation exception
[ https://issues.apache.org/jira/browse/FLINK-25700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478494#comment-17478494 ] simenliuxing edited comment on FLINK-25700 at 1/19/22, 9:52 AM: [~jark] hi, can you take a look? Maybe it's not a bug; it may just be a problem with the documentation. was (Author: simen): [~jark] hi,can you take a look? > flinksql Cascading Window TVF Aggregation exception > > > Key: FLINK-25700 > URL: https://issues.apache.org/jira/browse/FLINK-25700 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.14.0 >Reporter: simenliuxing >Priority: Major > > flink version 1.14.0 > sql > > {code:java} > CREATE TABLE order_test > ( > order_price int, > order_item varchar, > ts as localtimestamp, > WATERMARK FOR ts as ts - INTERVAL '5' SECOND > ) WITH ( > 'connector' = 'datagen' > ); > create view window_view2 as > select window_start, window_end, window_time as rowtime, sum(order_price) > partial_price > from TABLE( > TUMBLE( > TABLE order_test, DESCRIPTOR(ts), INTERVAL '2' SECOND) > ) > group by order_price, window_start, window_end, window_time; > select window_start, window_end, sum(partial_price) total_price > from TABLE( > TUMBLE( > TABLE window_view2, DESCRIPTOR(rowtime), INTERVAL '10' SECOND) > ) > group by window_start, window_end;{code} > exception > > {code:java} > Caused by: org.apache.calcite.sql.validate.SqlValidatorException: Column > 'window_start' is ambiguous > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467) > at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:560) > ... 41 more{code} > > > I think it's a bug -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-25699) Use HashMap for MAP value constructors
[ https://issues.apache.org/jira/browse/FLINK-25699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478500#comment-17478500 ] Sergey Nuyanzin commented on FLINK-25699: - Hi [~twalthr], am I correct that this issue is related to your comment in the Map-to-Map casting PR here: https://github.com/apache/flink/pull/18287#discussion_r786612124 If yes, and if you don't mind, I would like to pick up this issue. > Use HashMap for MAP value constructors > -- > > Key: FLINK-25699 > URL: https://issues.apache.org/jira/browse/FLINK-25699 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Runtime >Reporter: Timo Walther >Priority: Major > > Currently, the usage of maps is inconsistent. It is not ensured that > duplicate keys get eliminated. For CAST and output conversion this is solved. > However, we should have a similar implementation in the MAP value constructor > as in CAST. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] rkhachatryan merged pull request #18393: Revert "[FLINK-25399][tests] Disable changelog backend in tests"
rkhachatryan merged pull request #18393: URL: https://github.com/apache/flink/pull/18393 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-25485) JDBC connector implicitly append url options when use mysql
[ https://issues.apache.org/jira/browse/FLINK-25485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ada Wong updated FLINK-25485: - Description: When we directly use the buffer-flush options of the MySQL sink (sink.buffer-flush.max-rows and sink.buffer-flush.interval), they cannot increase throughput. We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL for 'sink.buffer-flush' to take effect. I want to add a method appendUrlSuffix in MySQLDialect to automatically set this JDBC option. Many users forget or don't know this option. Inspired by Alibaba DataX. [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] was: When we directly use the buffer-flush options of the MySQL sink, the buffer-flush options (sink.buffer-flush.max-rows, sink.buffer-flush.interval) cannot increase throughput. We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL for 'sink.buffer-flush' to take effect. I want to add a method appendUrlSuffix in MySQLDialect to automatically set this JDBC option. Many users forget or don't know this option. Inspired by Alibaba DataX. [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] > JDBC connector implicitly append url options when use mysql > --- > > Key: FLINK-25485 > URL: https://issues.apache.org/jira/browse/FLINK-25485 > Project: Flink > Issue Type: Improvement > Components: Connectors / JDBC >Affects Versions: 1.14.2 >Reporter: Ada Wong >Priority: Major > > When we directly use the buffer-flush options of the MySQL sink > (sink.buffer-flush.max-rows and sink.buffer-flush.interval), they cannot increase > throughput. > We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL, for > 'sink.buffer-flush' to take effect. > I want to add a method appendUrlSuffix in MySQLDialect to automatically set this jdbc > option. > Many users forget or don't know this option. > Inspired by Alibaba DataX. > [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] LadyForest commented on a change in pull request #18394: [FLINK-25520][Table SQL/API] Implement "ALTER TABLE ... COMPACT" SQL
LadyForest commented on a change in pull request #18394: URL: https://github.com/apache/flink/pull/18394#discussion_r787401902 ## File path: flink-formats/flink-orc/pom.xml ## @@ -145,6 +145,19 @@ under the License. test test-jar + Review comment: `TestManagedFactory` can be removed from the `org.apache.flink.table.factories.Factory` file under the `table-planner` module to avoid adding a test-jar dependency. But `flink-orc` and `flink-parquet` cannot eliminate the `guava` test dependency; otherwise a `NoClassDefFoundError` is thrown (this transitive dependency is introduced by Calcite). -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint
flinkbot edited a comment on pull request #18303: URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239 ## CI report: * 6024e28f7d315a7d1157f9af33da9b815f81a27a Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29646) * ac345a207c5ac67b7dcb8ed34ad939a701c1c834 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29686) * bf542cd367ad218778d1eb093da1ee3e1c68728d UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-25485) JDBC connector implicitly append url options when use mysql
[ https://issues.apache.org/jira/browse/FLINK-25485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ada Wong updated FLINK-25485: - Description: When we directly use the buffer-flush options of the MySQL sink (sink.buffer-flush.max-rows and sink.buffer-flush.interval), they cannot increase throughput. We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL for 'sink.buffer-flush' to take effect. For example: jdbc:mysql://ip:3306/dbname?rewriteBatchedStatements=true I want to add a method appendUrlSuffix in MySQLDialect to automatically set this JDBC option. Many users forget or don't know this option. Inspired by Alibaba DataX. [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] was: When we directly use the buffer-flush options of the MySQL sink (sink.buffer-flush.max-rows and sink.buffer-flush.interval), they cannot increase throughput. We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL for 'sink.buffer-flush' to take effect. I want to add a method appendUrlSuffix in MySQLDialect to automatically set this JDBC option. Many users forget or don't know this option. Inspired by Alibaba DataX. [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] > JDBC connector implicitly append url options when use mysql > --- > > Key: FLINK-25485 > URL: https://issues.apache.org/jira/browse/FLINK-25485 > Project: Flink > Issue Type: Improvement > Components: Connectors / JDBC >Affects Versions: 1.14.2 >Reporter: Ada Wong >Priority: Major > > When we directly use the buffer-flush options of the MySQL sink > (sink.buffer-flush.max-rows and sink.buffer-flush.interval), they cannot increase > throughput. > We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL, for > 'sink.buffer-flush' to take effect. > For example: jdbc:mysql://ip:3306/dbname?rewriteBatchedStatements=true > I want to add a method appendUrlSuffix in MySQLDialect to automatically set this jdbc > option. > Many users forget or don't know this option. > Inspired by Alibaba DataX. > [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-14304) Avoid task starvation with mailbox
[ https://issues.apache.org/jira/browse/FLINK-14304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478503#comment-17478503 ] Bo Cui commented on FLINK-14304: hi [~arvid] why add the condition? https://github.com/apache/flink/blob/6512214cb3d5774ae40c8ccae8785a48dc868e2b/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/mailbox/TaskMailboxImpl.java#L260 > Avoid task starvation with mailbox > -- > > Key: FLINK-14304 > URL: https://issues.apache.org/jira/browse/FLINK-14304 > Project: Flink > Issue Type: Improvement > Components: Runtime / Task >Reporter: Arvid Heise >Assignee: Arvid Heise >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Currently, all mails are always prioritized over regular input, which makes > sense in most cases. However, it's easy to devise an operator that gets into > starvation: each mail enqueues a new mail. > This ticket implements a simple extension in the mailbox processor: instead > of draining the mailbox one-by-one, fetch all mails from the mailbox and run > them one-by-one before running the default action. Only then, fetch all mails > again and repeat. > So we execute all mails that are available at the start of this loop but no > mails that are added in the meantime. > Special attention needs to be directed towards yield to downstream, such that > it doesn't process mails outside of the current batch. -- This message was sent by Atlassian Jira (v8.20.1#820001)
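The batching idea in the quoted description can be sketched in a few lines: take a snapshot of the mailbox, run only those mails, then run the default action, so that mails enqueued in the meantime wait for the next round. This is an illustration of the starvation-avoidance scheme only, not TaskMailboxImpl itself:

{code:java}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

/** Sketch of batch-draining a mailbox so the default action is never starved. */
public final class MailboxLoopSketch {

    private final Queue<Runnable> mailbox = new ArrayDeque<>();

    public void put(Runnable mail) {
        mailbox.add(mail);
    }

    public void runLoopOnce(Runnable defaultAction) {
        // Snapshot the current batch: mails that these mails enqueue are NOT part of it.
        List<Runnable> batch = new ArrayList<>(mailbox);
        mailbox.clear();
        for (Runnable mail : batch) {
            mail.run();
        }
        // The regular input processing runs after each batch, so a mail that
        // keeps enqueuing new mails can no longer starve it.
        defaultAction.run();
    }
}
{code}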
[GitHub] [flink] LadyForest commented on a change in pull request #18394: [FLINK-25520][Table SQL/API] Implement "ALTER TABLE ... COMPACT" SQL
LadyForest commented on a change in pull request #18394: URL: https://github.com/apache/flink/pull/18394#discussion_r787562855 ## File path: flink-table/flink-table-common/src/test/java/org/apache/flink/table/connector/source/TestManagedTableSink.java ## @@ -0,0 +1,468 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.connector.source; + +import org.apache.flink.api.common.serialization.Encoder; +import org.apache.flink.api.connector.sink.Committer; +import org.apache.flink.api.connector.sink.GlobalCommitter; +import org.apache.flink.api.connector.sink.Sink; +import org.apache.flink.api.connector.sink.SinkWriter; +import org.apache.flink.core.fs.FSDataOutputStream; +import org.apache.flink.core.fs.FileSystem; +import org.apache.flink.core.fs.Path; +import org.apache.flink.core.io.SimpleVersionedSerializer; +import org.apache.flink.core.memory.DataInputDeserializer; +import org.apache.flink.core.memory.DataOutputSerializer; +import org.apache.flink.table.catalog.CatalogPartitionSpec; +import org.apache.flink.table.catalog.ObjectIdentifier; +import org.apache.flink.table.connector.ChangelogMode; +import org.apache.flink.table.connector.sink.DynamicTableSink; +import org.apache.flink.table.connector.sink.SinkProvider; +import org.apache.flink.table.connector.sink.abilities.SupportsOverwrite; +import org.apache.flink.table.connector.sink.abilities.SupportsPartitioning; +import org.apache.flink.table.data.GenericRowData; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.data.StringData; +import org.apache.flink.table.factories.DynamicTableFactory; +import org.apache.flink.table.factories.TestManagedTableFactory; +import org.apache.flink.table.utils.PartitionPathUtils; + +import java.io.IOException; +import java.io.OutputStream; +import java.io.Serializable; +import java.nio.charset.StandardCharsets; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.Set; +import java.util.UUID; +import java.util.concurrent.atomic.AtomicReference; + +/** Managed {@link DynamicTableSink} for testing. 
*/ +public class TestManagedTableSink +implements DynamicTableSink, SupportsOverwrite, SupportsPartitioning { + +private final DynamicTableFactory.Context context; +private final Path basePath; + +private LinkedHashMap<String, String> staticPartitionSpecs = new LinkedHashMap<>(); +private boolean overwrite = false; + +public TestManagedTableSink(DynamicTableFactory.Context context, Path basePath) { +this.context = context; +this.basePath = basePath; +} + +@Override +public ChangelogMode getChangelogMode(ChangelogMode requestedMode) { +return ChangelogMode.insertOnly(); +} + +@Override +public SinkRuntimeProvider getSinkRuntimeProvider(Context context) { +return SinkProvider.of(new TestManagedSink(this.context.getObjectIdentifier(), basePath)); +} + +@Override +public DynamicTableSink copy() { +TestManagedTableSink copied = new TestManagedTableSink(context, basePath); +copied.overwrite = this.overwrite; +copied.staticPartitionSpecs = this.staticPartitionSpecs; +return copied; +} + +@Override +public String asSummaryString() { +return "TestManagedSink"; +} + +@Override +public void applyOverwrite(boolean overwrite) { +this.overwrite = overwrite; +} + +@Override +public void applyStaticPartition(Map<String, String> partition) { +List<String> partitionKeys = context.getCatalogTable().getPartitionKeys(); +for (String partitionKey : partitionKeys) { +if (partition.containsKey(partitionKey)) { +staticPartitionSpecs.put(partitionKey, partition.get(partitionKey)); +} +} +} + +/** Managed {@link Sink} for testing compaction. */ +public static class TestManagedSink +implements Sink { + +private final ObjectIdentifier tableIdentifier; +private
[jira] [Updated] (FLINK-25485) JDBC connector implicitly appends URL options when using MySQL
[ https://issues.apache.org/jira/browse/FLINK-25485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ada Wong updated FLINK-25485: - Description: Directly using the buffer-flush options of the MySQL sink (sink.buffer-flush.max-rows and sink.buffer-flush.interval) does not increase throughput. We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL for 'sink.buffer-flush' to take effect. For example: jdbc:mysql://ip:3306/dbname?rewriteBatchedStatements=true I want to add a method appendUrlSuffix in MySQLDialect to set this JDBC option automatically. Many users forget this option or don't know about it. Inspired by Alibaba DataX. [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] was: Directly using the buffer-flush options of the MySQL sink (sink.buffer-flush.max-rows and sink.buffer-flush.interval) does not increase throughput. We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL for 'sink.buffer-flush' to take effect. For example: jdbc:mysql://ip:3306/dbname?rewriteBatchedStatements=true I want to add a method appendUrlSuffix in MySQLDialect to set this JDBC option automatically. Many users forget this option or don't know about it. Inspired by Alibaba DataX. [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] > JDBC connector implicitly appends URL options when using MySQL > --- > > Key: FLINK-25485 > URL: https://issues.apache.org/jira/browse/FLINK-25485 > Project: Flink > Issue Type: Improvement > Components: Connectors / JDBC >Affects Versions: 1.14.2 >Reporter: Ada Wong >Priority: Major > > Directly using the buffer-flush options of the MySQL sink > (sink.buffer-flush.max-rows and sink.buffer-flush.interval) does not increase > throughput. > We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL for > 'sink.buffer-flush' to take effect. > For example: jdbc:mysql://ip:3306/dbname?rewriteBatchedStatements=true > I want to add a method appendUrlSuffix in MySQLDialect to set this JDBC > option automatically. > Many users forget this option or don't know about it. > Inspired by Alibaba DataX. > [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (FLINK-25485) JDBC connector implicitly appends URL options when using MySQL
[ https://issues.apache.org/jira/browse/FLINK-25485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ada Wong updated FLINK-25485: - Description: Directly using the buffer-flush options of the MySQL sink (sink.buffer-flush.max-rows and sink.buffer-flush.interval) does not increase throughput. We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL for 'sink.buffer-flush' to take effect. For example: 'jdbc:mysql://ip:3306/dbname?rewriteBatchedStatements=true' I want to add a method appendUrlSuffix in MySQLDialect to set this JDBC option automatically. Many users forget this option or don't know about it. Inspired by Alibaba DataX. [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] was: Directly using the buffer-flush options of the MySQL sink (sink.buffer-flush.max-rows and sink.buffer-flush.interval) does not increase throughput. We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL for 'sink.buffer-flush' to take effect. For example: jdbc:mysql://ip:3306/dbname?rewriteBatchedStatements=true I want to add a method appendUrlSuffix in MySQLDialect to set this JDBC option automatically. Many users forget this option or don't know about it. Inspired by Alibaba DataX. [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] > JDBC connector implicitly appends URL options when using MySQL > --- > > Key: FLINK-25485 > URL: https://issues.apache.org/jira/browse/FLINK-25485 > Project: Flink > Issue Type: Improvement > Components: Connectors / JDBC >Affects Versions: 1.14.2 >Reporter: Ada Wong >Priority: Major > > Directly using the buffer-flush options of the MySQL sink > (sink.buffer-flush.max-rows and sink.buffer-flush.interval) does not increase > throughput. > We must set 'rewriteBatchedStatements=true' in the MySQL JDBC URL for > 'sink.buffer-flush' to take effect. > For example: 'jdbc:mysql://ip:3306/dbname?rewriteBatchedStatements=true' > I want to add a method appendUrlSuffix in MySQLDialect to set this JDBC > option automatically. > Many users forget this option or don't know about it. > Inspired by Alibaba DataX. > [https://github.com/alibaba/DataX/blob/master/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java] -- This message was sent by Atlassian Jira (v8.20.1#820001)
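To make the proposal concrete, here is a minimal sketch of what an appendUrlSuffix-style helper could look like. The method name comes from the ticket; the class name and exact behavior below are assumptions for illustration, not actual Flink MySQLDialect code.
{code}
// Hypothetical sketch of the appendUrlSuffix idea from FLINK-25485;
// not actual Flink MySQLDialect code.
public final class MySqlUrlSuffixExample {

    private MySqlUrlSuffixExample() {}

    public static String appendUrlSuffix(String url) {
        // Respect an explicit user setting if one is already present.
        if (url.toLowerCase().contains("rewritebatchedstatements")) {
            return url;
        }
        String separator = url.contains("?") ? "&" : "?";
        return url + separator + "rewriteBatchedStatements=true";
    }

    public static void main(String[] args) {
        // Prints: jdbc:mysql://ip:3306/dbname?rewriteBatchedStatements=true
        System.out.println(appendUrlSuffix("jdbc:mysql://ip:3306/dbname"));
    }
}
{code}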
[jira] [Commented] (FLINK-25399) AZP fails with exit code 137 when running checkpointing test cases
[ https://issues.apache.org/jira/browse/FLINK-25399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478508#comment-17478508 ] Roman Khachatryan commented on FLINK-25399: --- Changelog backend was re-enabled in tests in aae9773401ee8774e1298a3b906d4f4778748cf4 The issue itself was fixed in FLINK-25395. > AZP fails with exit code 137 when running checkpointing test cases > -- > > Key: FLINK-25399 > URL: https://issues.apache.org/jira/browse/FLINK-25399 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.15.0 >Reporter: Till Rohrmann >Assignee: Roman Khachatryan >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.15.0 > > > The AZP build for fine grained resource management failed with exit code 137, > when running an extensive list of checkpointing tests: > {code} > 2021-12-21T06:06:08.8728404Z Dec 21 06:06:08 [INFO] Running > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase > 2021-12-21T06:06:37.6584668Z Dec 21 06:06:37 Starting > org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase#shouldRescaleUnalignedCheckpoint[upscale > union from 3 to 7, buffersPerChannel = 0]. > 2021-12-21T06:06:37.6585685Z Dec 21 06:06:37 Starting > org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase#shouldRescaleUnalignedCheckpoint[upscale > union from 3 to 7, buffersPerChannel = 0]. > 2021-12-21T06:06:37.6593448Z Dec 21 06:06:37 Finished > org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase#shouldRescaleUnalignedCheckpoint[upscale > union from 3 to 7, buffersPerChannel = 0]. > 2021-12-21T06:06:41.3044200Z Dec 21 06:06:41 Starting > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testSlidingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:06:41.3045146Z Dec 21 06:06:41 Finished > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testSlidingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:06:49.7482529Z Dec 21 06:06:49 Starting > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindowWithKVStateMinMaxParallelism[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:06:49.7483922Z Dec 21 06:06:49 Finished > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindowWithKVStateMinMaxParallelism[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:06:56.7462828Z Dec 21 06:06:56 Starting > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:06:56.7463831Z Dec 21 06:06:56 Finished > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:07:06.7225398Z Dec 21 06:07:06 Starting > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindowWithKVStateMaxMaxParallelism[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:07:06.7226580Z Dec 21 06:07:06 Finished > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindowWithKVStateMaxMaxParallelism[statebackend > type =MEM, buffersPerChannel = 0]. 
> 2021-12-21T06:07:12.1987555Z Dec 21 06:07:12 Starting > org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase#shouldRescaleUnalignedCheckpoint[upscale > union from 3 to 7, buffersPerChannel = 1]. > 2021-12-21T06:07:12.1992168Z Dec 21 06:07:12 Starting > org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase#shouldRescaleUnalignedCheckpoint[upscale > union from 3 to 7, buffersPerChannel = 1]. > 2021-12-21T06:07:12.1993591Z Dec 21 06:07:12 Finished > org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase#shouldRescaleUnalignedCheckpoint[upscale > union from 3 to 7, buffersPerChannel = 1]. > 2021-12-21T06:07:16.3825669Z Dec 21 06:07:15 Starting > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testPreAggregatedTumblingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:07:16.3826827Z Dec 21 06:07:15 Finished > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testPreAggregatedTumblingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:07:23.4489701Z Dec 21 06:07:23 Starting > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testPreAggregatedSlidingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:07:23.4495250Z Dec 21 06:07:23 F
[jira] [Closed] (FLINK-25399) AZP fails with exit code 137 when running checkpointing test cases
[ https://issues.apache.org/jira/browse/FLINK-25399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Khachatryan closed FLINK-25399. - Resolution: Duplicate > AZP fails with exit code 137 when running checkpointing test cases > -- > > Key: FLINK-25399 > URL: https://issues.apache.org/jira/browse/FLINK-25399 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.15.0 >Reporter: Till Rohrmann >Assignee: Roman Khachatryan >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.15.0 > > > The AZP build for fine grained resource management failed with exit code 137, > when running an extensive list of checkpointing tests: > {code} > 2021-12-21T06:06:08.8728404Z Dec 21 06:06:08 [INFO] Running > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase > 2021-12-21T06:06:37.6584668Z Dec 21 06:06:37 Starting > org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase#shouldRescaleUnalignedCheckpoint[upscale > union from 3 to 7, buffersPerChannel = 0]. > 2021-12-21T06:06:37.6585685Z Dec 21 06:06:37 Starting > org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase#shouldRescaleUnalignedCheckpoint[upscale > union from 3 to 7, buffersPerChannel = 0]. > 2021-12-21T06:06:37.6593448Z Dec 21 06:06:37 Finished > org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase#shouldRescaleUnalignedCheckpoint[upscale > union from 3 to 7, buffersPerChannel = 0]. > 2021-12-21T06:06:41.3044200Z Dec 21 06:06:41 Starting > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testSlidingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:06:41.3045146Z Dec 21 06:06:41 Finished > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testSlidingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:06:49.7482529Z Dec 21 06:06:49 Starting > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindowWithKVStateMinMaxParallelism[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:06:49.7483922Z Dec 21 06:06:49 Finished > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindowWithKVStateMinMaxParallelism[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:06:56.7462828Z Dec 21 06:06:56 Starting > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:06:56.7463831Z Dec 21 06:06:56 Finished > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:07:06.7225398Z Dec 21 06:07:06 Starting > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindowWithKVStateMaxMaxParallelism[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:07:06.7226580Z Dec 21 06:07:06 Finished > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindowWithKVStateMaxMaxParallelism[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:07:12.1987555Z Dec 21 06:07:12 Starting > org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase#shouldRescaleUnalignedCheckpoint[upscale > union from 3 to 7, buffersPerChannel = 1]. 
> 2021-12-21T06:07:12.1992168Z Dec 21 06:07:12 Starting > org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase#shouldRescaleUnalignedCheckpoint[upscale > union from 3 to 7, buffersPerChannel = 1]. > 2021-12-21T06:07:12.1993591Z Dec 21 06:07:12 Finished > org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase#shouldRescaleUnalignedCheckpoint[upscale > union from 3 to 7, buffersPerChannel = 1]. > 2021-12-21T06:07:16.3825669Z Dec 21 06:07:15 Starting > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testPreAggregatedTumblingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:07:16.3826827Z Dec 21 06:07:15 Finished > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testPreAggregatedTumblingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:07:23.4489701Z Dec 21 06:07:23 Starting > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testPreAggregatedSlidingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]. > 2021-12-21T06:07:23.4495250Z Dec 21 06:07:23 Finished > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testPreAggregatedSlidingTimeWindow[statebackend > type =MEM, buffersPerChannel = 0]
[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint
flinkbot edited a comment on pull request #18303: URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239 ## CI report: * 6024e28f7d315a7d1157f9af33da9b815f81a27a Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29646) * ac345a207c5ac67b7dcb8ed34ad939a701c1c834 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29686) * bf542cd367ad218778d1eb093da1ee3e1c68728d Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29695)
[jira] [Comment Edited] (FLINK-24163) PartiallyFinishedSourcesITCase fails due to timeout
[ https://issues.apache.org/jira/browse/FLINK-24163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17478483#comment-17478483 ] Roman Khachatryan edited comment on FLINK-24163 at 1/19/22, 9:58 AM: - Hi Yun, sure, I'll take a look. As for the mentioned commit, it reverted RocksDB snapshotting back to the older behavior which is slower. So the increase there is expected. was (Author: roman_khachatryan): Hi Yun, sure, I'll take a look. As for the mentioned commit, it reverted the changes back to the older behavior which is slower. So the increase there is expected. > PartiallyFinishedSourcesITCase fails due to timeout > --- > > Key: FLINK-24163 > URL: https://issues.apache.org/jira/browse/FLINK-24163 > Project: Flink > Issue Type: Bug > Components: API / DataStream >Affects Versions: 1.14.0, 1.15.0 >Reporter: Xintong Song >Assignee: Yun Gao >Priority: Blocker > Labels: pull-request-available, test-stability > Fix For: 1.14.0, 1.15.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23529&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=10996 > {code} > Sep 04 04:35:28 [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, > Time elapsed: 155.236 s <<< FAILURE! - in > org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase > Sep 04 04:35:28 [ERROR] test[complex graph ALL_SUBTASKS, failover: false] > Time elapsed: 65.999 s <<< ERROR! > Sep 04 04:35:28 java.util.concurrent.TimeoutException: Condition was not met > in given timeout. > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:164) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:142) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:134) > Sep 04 04:35:28 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitForSubtasksToFinish(CommonTestUtils.java:297) > Sep 04 04:35:28 at > org.apache.flink.runtime.operators.lifecycle.TestJobExecutor.waitForSubtasksToFinish(TestJobExecutor.java:219) > Sep 04 04:35:28 at > org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase.test(PartiallyFinishedSourcesITCase.java:82) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
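For context on the failure mode above: CommonTestUtils.waitUntilCondition polls a condition until a timeout and throws TimeoutException when the condition is never met. A minimal sketch of that polling pattern follows; the signature and defaults are assumptions, not Flink's actual test utility.
{code}
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

// Illustrative polling helper in the spirit of
// CommonTestUtils.waitUntilCondition; not Flink's actual code.
public final class WaitUtil {

    private WaitUtil() {}

    public static void waitUntilCondition(
            BooleanSupplier condition, long timeoutMillis, long pollIntervalMillis)
            throws InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                // The path hit in the stack trace above.
                throw new TimeoutException("Condition was not met in given timeout.");
            }
            Thread.sleep(pollIntervalMillis);
        }
    }
}
{code}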