Re: Re: [DISCUSS] FLIP-147: Support Checkpoints After Tasks Finished
Hi all, I updated the FLIP[1] to reflect the major points discussed in the ML thread:

1) For "new" root tasks that finish before receiving the trigger message, we previously proposed to let the JM re-compute and re-trigger the descendant tasks, but during the discussion we realized that this could put noticeable load on the JobMaster in cascading-finish and high-parallelism cases. Another option is to let the StreamTask synchronize once with the CheckpointCoordinator before it finishes so that it learns about the pending checkpoints it has missed; since EndOfPartition has not been emitted yet at that point, the task can still broadcast barriers to its descendant tasks. I updated the details in this section[2] of the FLIP.

2) For barrier alignment, we now insert faked barriers into the input channels to avoid interfering with the checkpoint alignment algorithms. One remaining issue is that in unaligned checkpoint mode we cannot snapshot an upstream task's result partition once that task has finished. One option to address this is to make the upstream tasks wait until their buffers have been flushed before exiting; we would include this in future versions. I updated this part in this section[3] of the FLIP.

3) Some operators, like the Sink Committer, need to wait for one complete checkpoint before exiting. To support operators that need to wait for such a finalization condition (e.g., the Sink committer and Async I/O), we could introduce a new interface to mark these operators and let the runtime wait until they have reached their condition. I updated this part in this section[4] of the FLIP.

Could you have another look at the FLIP and the pending issues? Any feedback is warmly welcomed and appreciated. Many thanks!

Best, Yun

[1] https://cwiki.apache.org/confluence/x/mw-ZCQ
[2] https://cwiki.apache.org/confluence/display/FLINK/FLIP-147%3A+Support+Checkpoints+After+Tasks+Finished#FLIP147:SupportCheckpointsAfterTasksFinished-TriggeringCheckpointsAfterTasksFinished
[3] https://cwiki.apache.org/confluence/display/FLINK/FLIP-147%3A+Support+Checkpoints+After+Tasks+Finished#FLIP147:SupportCheckpointsAfterTasksFinished-ExtendthetaskBarrierAlignment
[4] https://cwiki.apache.org/confluence/display/FLINK/FLIP-147%3A+Support+Checkpoints+After+Tasks+Finished#FLIP147:SupportCheckpointsAfterTasksFinished-SupportOperatorsWaitingForCheckpointCompleteBeforeFinish

-- From:Yun Gao Send Time:2021 Jan. 12 (Tue.) 10:30 To:Khachatryan Roman Cc:dev ; user Subject:Re: Re: [DISCUSS] FLIP-147: Support Checkpoints After Tasks Finished

Hi Roman, many thanks for the feedback and suggestions!

> I think UC will be the common case with multiple sources each with DoP > 1.
> IIUC, waiting for EoP will be needed on each subtask each time one of it's source subtask finishes.

Yes, waiting for EoP would be required for each input channel if we do not block the finished upstream task specially.

> Yes, but checkpoint completion notification will not be sent until all the EOPs are processed.

The downstream tasks that get triggered must indeed wait until they have received the EoPs from all input channels. I initially compared this with the fully aligned case; the remaining execution graph after the triggered task can still take normal unaligned checkpoints (e.g., if A -> B -> C -> D, A finishes and B gets triggered, then B -> C -> D can still take normal unaligned checkpoints). But it still cannot bound the possible maximum delay.

> Not all declines cause job failure, particularly CHECKPOINT_DECLINED_TASK_NOT_READY doesn't.
Sorry, I had the logic wrong here; CHECKPOINT_DECLINED_TASK_NOT_READY indeed does not cause a failure. But since after a failed checkpoint we would have to wait for the checkpoint interval before the next one, I also agree that the following option, i.e. trying to complete each checkpoint, would be the better one.

>> Thus another possible option might be let the upstream task to wait till all
>> the pending buffers in the result partition has been flushed before get to
>> finish.
> This is what I meant by "postpone JM notification from source". Just blocking
> the task thread wouldn't add much complexity, though I'm not sure if it would
> cause any problems.
>> do you think it would be ok for us to view it as an optimization and
>> postpone it to future versions ?
> I think that's a good idea.

Also, my apologies for having misunderstood the proposal earlier; currently I do not see any explicit problems with waiting for the flush of the pipelined result partition either. Glad that we have the same view on this issue. :)

Best, Yun

-- From:Khachatryan Roman Send Time:2021 Jan. 11 (Mon.) 19:14 To:Yun Gao Cc:dev ; user Subject:Re: Re: [DISCUSS] FLIP-147: Support Checkpoints After Tasks F
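To illustrate point 3) of the update above, here is a minimal sketch of what such a marker interface could look like. The interface and method names are purely illustrative assumptions and not the API proposed in the FLIP:

{code:java}
/**
 * Illustrative sketch only: a marker interface an operator could implement to signal
 * that it must observe at least one completed checkpoint before its task is allowed
 * to finish (e.g. a sink committer with pending transactions or an Async I/O operator
 * with in-flight requests). Name and method are assumptions, not the FLIP's API.
 */
public interface RequiresFinalization {

    /**
     * Called by the runtime when a checkpoint completes; the task would defer
     * finishing until this returns true.
     */
    boolean isFinalized(long completedCheckpointId);
}
{code}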
Re: Re: Idea import Flink source code
Don't forget to use the reply-all button when replying to threads on the mailing lists. :-) Have you tried building the project via command line using `mvn -DskipTests -Dfast install` to pull all dependencies? And just to verify: you didn't change the code, did you? We're talking about the vanilla Flink source code...? Matthias On Wed, Jan 13, 2021 at 9:18 AM penguin. wrote: > Hi, > Thank you for your reminding. > It seems that there is something wrong with putting the picture in the > text. > > ▼Sync: at 2021/1/13 12:05 with 18 errors > > ▼Resolve dependencies 4 errors > Cannot resolve netminidev:json-smart:2.3 > Cannot resolve io.confluent:kafka-schema-registry-client:4.1.0 > Cannot resolve com.nimbusds:nimbus-jose-jwt:9.4.1 > Cannot resolve com.nimbusds:lang-tag:1.5 > ▼Resolve plugins 14 errors > Cannot resolve plugin org.codehaus.mojo:build-helper-maven-plugin: > > > Best, > penguin > > > > > 在 2021-01-13 15:24:22,"Matthias Pohl" 写道: > > Hi, > you might want to move these kinds of questions into the > u...@flink.apache.org which is the mailing list for community support > questions [1]. > Coming back to your question: Is it just me or is the image not > accessible? Could you provide a textual description of your problem? > > Best, > Matthias > > > [1] https://flink.apache.org/community.html#mailing-lists > > On Wed, Jan 13, 2021 at 6:18 AM penguin. wrote: > >> Hello, >> When importing the Flink source code into idea, the following error >> occurred. >> And several mirrors were configured in the setting file of maven, which >> did not solve the problem >> >> >> > > >
Re: [DISCUSS] Releasing Apache Flink 1.10.3
Thanks for starting this discussion, Matthias. +1 for releasing 1.10.3 as it contains a number of important fixes. Best, Xingbo Xintong Song 于2021年1月13日周三 下午3:46写道: > Thanks for bringing this up, Matthias. > > Per the "Update Policy for old releases" [1], normally we do not release > 1.10.x after 1.12.0 is released. However, the policy also says that we are > "open to discussing bugfix releases for even older versions". > > In this case, I'm +1 for releasing 1.10.3, for the dozens of non-released > fixes and the security flaws. > > As a reminder, I'd like to bring up FLINK-20906 [2] to be backported if we > are releasing 1.10.3, which updates the copyright year in NOTICE files to > 2021. > > Thank you~ > > Xintong Song > > > [1] https://flink.apache.org/downloads.html > [2] https://issues.apache.org/jira/browse/FLINK-20906 > > On Tue, Jan 12, 2021 at 7:15 PM Matthias Pohl > wrote: > > > Hi, > > I'd like to initiate a discussion on releasing Flink 1.10.3. There were a > > few requests in favor of this already in [1] and [2]. > > > > I checked the release-1.10 branch: 55 commits are not released, yet. > > Some non-released fixes that might be relevant are: > > - FLINK-20218 [3] - fix "module 'urllib' has no attribute 'parse'" due to > > ProtoBuf version update > > - FLINK-20013 [4] - BoundedBlockingSubpartition may leak network buffer > > - FLINK-19252 [5] - temporary folder is not created when missing > > - FLINK-19557 [6] - LeaderRetrievalListener notification upon ZooKeeper > > reconnection > > - FLINK-19523 [7] - hide sensitive information in logs > > > > In addition to that, we would like to include a backport for > CVE-2020-17518 > > and CVE-2020-17519 to cover the request in [2]. > > > > The travis-ci build chain for release-1.10 seems to be stable [8]. > > Any thoughts on that? Unfortunately, I cannot volunteer as a release > > manager due to the lack of permissions. But I wanted to start the > > discussion, anyway. > > > > Best, > > Matthias > > > > [1] > > > > > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Weekly-Community-Update-2020-44-45-td46486.html#a47610 > > [2] https://issues.apache.org/jira/browse/FLINK-20875 > > [3] https://issues.apache.org/jira/browse/FLINK-20218 > > [4] https://issues.apache.org/jira/browse/FLINK-20013 > > [5] https://issues.apache.org/jira/browse/FLINK-19252 > > [6] https://issues.apache.org/jira/browse/FLINK-19557 > > [7] https://issues.apache.org/jira/browse/FLINK-19523 > > [8] https://travis-ci.com/github/apache/flink/builds/212749910 > > >
Re:Re: Re: Idea import Flink source code
Hi, I click the reply button every time... Does this mean that only the replied person can see the email? If Maven fails to download plugins or dependencies, is mvn -clean install -DskipTests a must? I'll try first. penguin 在 2021-01-13 16:35:10,"Matthias Pohl" 写道: Don't forget to use the reply-all button when replying to threads on the mailing lists. :-) Have you tried building the project via command line using `mvn -DskipTests -Dfast install` to pull all dependencies? And just to verify: you didn't change the code, did you? We're talking about the vanilla Flink source code...? Matthias On Wed, Jan 13, 2021 at 9:18 AM penguin. wrote: Hi, Thank you for your reminding. It seems that there is something wrong with putting the picture in the text. ▼Sync: at 2021/1/13 12:05 with 18 errors ▼Resolve dependencies 4 errors Cannot resolve netminidev:json-smart:2.3 Cannot resolve io.confluent:kafka-schema-registry-client:4.1.0 Cannot resolve com.nimbusds:nimbus-jose-jwt:9.4.1 Cannot resolve com.nimbusds:lang-tag:1.5 ▼Resolve plugins 14 errors Cannot resolve plugin org.codehaus.mojo:build-helper-maven-plugin: Best, penguin 在 2021-01-13 15:24:22,"Matthias Pohl" 写道: Hi, you might want to move these kinds of questions into the u...@flink.apache.org which is the mailing list for community support questions [1]. Coming back to your question: Is it just me or is the image not accessible? Could you provide a textual description of your problem? Best, Matthias [1] https://flink.apache.org/community.html#mailing-lists On Wed, Jan 13, 2021 at 6:18 AM penguin. wrote: Hello, When importing the Flink source code into idea, the following error occurred. And several mirrors were configured in the setting file of maven, which did not solve the problem
Re: [DISCUSS] Support obtaining Hive delegation tokens when submitting application to Yarn
Hi Jie Wang, thanks for starting this discussion. To me the SPI approach sounds better because it is not as brittle as using reflection. Concerning the configuration, we could think about introducing some Hive specific configuration options which allow us to specify these paths. How are other projects which integrate with Hive solving this problem? Cheers, Till On Tue, Jan 12, 2021 at 4:13 PM 王 杰 wrote: > Hi everyone, > > Currently, the Hive delegation token is not obtained when Flink submits the > application in Yarn mode using the kinit way. The ticket is > https://issues.apache.org/jira/browse/FLINK-20714. I'd like to start a > discussion about how to support this feature. > > Maybe we have two options: > 1. Using a reflection way to construct a Hive client to obtain the token, > just the same as the org.apache.flink.yarn.Utils.obtainTokenForHBase > implementation. > 2. Introduce a pluggable delegation provider via SPI. The delegation provider > could be placed in connector-related code, so reflection is not needed and > it is more extendable. > > > > Both options have to handle how to specify the HiveConf to use. In the Hive > connector, the user could specify both hiveConfDir and hadoopConfDir when > creating the HiveCatalog. The hadoopConfDir may not be the same as the Hadoop > configuration in HadoopModule. > > Looking forward to your suggestions. > > -- > Best regards! > Jie Wang > >
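To make option 2 from the quoted proposal a bit more concrete, here is a rough sketch of what a pluggable provider discovered via java.util.ServiceLoader could look like. The interface name and method signatures are assumptions for illustration only, not an agreed design:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Credentials;

/**
 * Illustrative sketch of a pluggable delegation token provider loaded via SPI
 * (java.util.ServiceLoader). Names and signatures are assumptions, not an agreed API.
 */
public interface DelegationTokenProvider {

    /** A short service name, e.g. "hive" or "hbase". */
    String serviceName();

    /**
     * Obtains the service's delegation tokens and adds them to the given credentials.
     * The configuration passed in would have to account for a hiveConfDir/hadoopConfDir
     * that differs from the Hadoop configuration used by HadoopModule.
     */
    void obtainDelegationTokens(Configuration hadoopConf, Credentials credentials) throws Exception;
}
{code}

A connector would then ship an implementation plus a META-INF/services entry, and the YARN client would iterate over all providers found on the classpath before submitting the application.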
Re: 7UUNA`SE$DZI74Y)S)T)GZB
Hi Penguin, the attached screenshot is not displayed correctly. Maybe you can post the error to this thread. Cheers, Till On Wed, Jan 13, 2021 at 6:17 AM penguin. wrote: > Hello, > When importing the Flink source code into idea, the following error > occurred. > And several mirrors were configured in the setting file of maven, which > did not solve the problem > > > > > >
Re: [DISCUSS] Releasing Apache Flink 1.10.3
Thanks for starting this discussion Matthias. I agree with all of you that a final 1.10.3 release could be really helpful for our users. Given that CI passes, it shouldn't be too much overhead either. Cheers, Till On Wed, Jan 13, 2021 at 9:45 AM Xingbo Huang wrote: > Thanks for starting this discussion, Matthias. > > +1 for releasing 1.10.3 as it contains a number of important fixes. > > Best, > Xingbo > > Xintong Song 于2021年1月13日周三 下午3:46写道: > > > Thanks for bringing this up, Matthias. > > > > Per the "Update Policy for old releases" [1], normally we do not release > > 1.10.x after 1.12.0 is released. However, the policy also says that we > are > > "open to discussing bugfix releases for even older versions". > > > > In this case, I'm +1 for releasing 1.10.3, for the dozens of > non-released > > fixes and the security flaws. > > > > As a reminder, I'd like to bring up FLINK-20906 [2] to be backported if > we > > are releasing 1.10.3, which updates the copyright year in NOTICE files to > > 2021. > > > > Thank you~ > > > > Xintong Song > > > > > > [1] https://flink.apache.org/downloads.html > > [2] https://issues.apache.org/jira/browse/FLINK-20906 > > > > On Tue, Jan 12, 2021 at 7:15 PM Matthias Pohl > > wrote: > > > > > Hi, > > > I'd like to initiate a discussion on releasing Flink 1.10.3. There > were a > > > few requests in favor of this already in [1] and [2]. > > > > > > I checked the release-1.10 branch: 55 commits are not released, yet. > > > Some non-released fixes that might be relevant are: > > > - FLINK-20218 [3] - fix "module 'urllib' has no attribute 'parse'" due > to > > > ProtoBuf version update > > > - FLINK-20013 [4] - BoundedBlockingSubpartition may leak network buffer > > > - FLINK-19252 [5] - temporary folder is not created when missing > > > - FLINK-19557 [6] - LeaderRetrievalListener notification upon ZooKeeper > > > reconnection > > > - FLINK-19523 [7] - hide sensitive information in logs > > > > > > In addition to that, we would like to include a backport for > > CVE-2020-17518 > > > and CVE-2020-17519 to cover the request in [2]. > > > > > > The travis-ci build chain for release-1.10 seems to be stable [8]. > > > Any thoughts on that? Unfortunately, I cannot volunteer as a release > > > manager due to the lack of permissions. But I wanted to start the > > > discussion, anyway. > > > > > > Best, > > > Matthias > > > > > > [1] > > > > > > > > > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Weekly-Community-Update-2020-44-45-td46486.html#a47610 > > > [2] https://issues.apache.org/jira/browse/FLINK-20875 > > > [3] https://issues.apache.org/jira/browse/FLINK-20218 > > > [4] https://issues.apache.org/jira/browse/FLINK-20013 > > > [5] https://issues.apache.org/jira/browse/FLINK-19252 > > > [6] https://issues.apache.org/jira/browse/FLINK-19557 > > > [7] https://issues.apache.org/jira/browse/FLINK-19523 > > > [8] https://travis-ci.com/github/apache/flink/builds/212749910 > > > > > >
Re: 7UUNA`SE$DZI74Y)S)T)GZB
FYI: This was a duplicate post. The discussion continued in [1]. Matthias [1] http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Re-Idea-import-Flink-source-code-td40597.html On Wed, Jan 13, 2021 at 9:54 AM Till Rohrmann wrote: > Hi Penguin, > > the attached screenshot is not displayed correctly. Maybe you can post the > error to this thread. > > Cheers, > Till > > On Wed, Jan 13, 2021 at 6:17 AM penguin. wrote: > > > Hello, > > When importing the Flink source code into idea, the following error > > occurred. > > And several mirrors were configured in the setting file of maven, which > > did not solve the problem > > > > > > > > > > > >
Re: Re: Re: Idea import Flink source code
The mvn command helps to identify whether your issue is related to Maven and/or missing dependencies or whether it's an Intellij problem. Usually, running `mvn clean install -DskipTests -Dfast` isn't required to import the Flink project into Intellij. Best, Matthias PS: reply adds only the immediate responder to the recipient lists (as happened in your first reply) vs reply-all would also automatically add the ML email address(es) (and other thread participants) to the CC list. On Wed, Jan 13, 2021 at 9:49 AM penguin. wrote: > Hi, > I click the reply button every time... Does this mean that only the > replied person can see the email? > > > If Maven fails to download plugins or dependencies, is mvn -clean > install -DskipTests a must? > I'll try first. > > penguin > > > > > 在 2021-01-13 16:35:10,"Matthias Pohl" 写道: > > Don't forget to use the reply-all button when replying to threads on the > mailing lists. :-) > > Have you tried building the project via command line using `mvn > -DskipTests -Dfast install` to pull all dependencies? > And just to verify: you didn't change the code, did you? We're talking > about the vanilla Flink source code...? > > Matthias > > On Wed, Jan 13, 2021 at 9:18 AM penguin. wrote: > >> Hi, >> Thank you for your reminding. >> It seems that there is something wrong with putting the picture in the >> text. >> >> ▼Sync: at 2021/1/13 12:05 with 18 errors >> >> ▼Resolve dependencies 4 errors >> Cannot resolve netminidev:json-smart:2.3 >> Cannot resolve io.confluent:kafka-schema-registry-client:4.1.0 >> Cannot resolve com.nimbusds:nimbus-jose-jwt:9.4.1 >> Cannot resolve com.nimbusds:lang-tag:1.5 >> ▼Resolve plugins 14 errors >> Cannot resolve plugin org.codehaus.mojo:build-helper-maven-plugin: >> >> >> Best, >> penguin >> >> >> >> >> 在 2021-01-13 15:24:22,"Matthias Pohl" 写道: >> >> Hi, >> you might want to move these kinds of questions into the >> u...@flink.apache.org which is the mailing list for community support >> questions [1]. >> Coming back to your question: Is it just me or is the image not >> accessible? Could you provide a textual description of your problem? >> >> Best, >> Matthias >> >> >> [1] https://flink.apache.org/community.html#mailing-lists >> >> On Wed, Jan 13, 2021 at 6:18 AM penguin. wrote: >> >>> Hello, >>> When importing the Flink source code into idea, the following error >>> occurred. >>> And several mirrors were configured in the setting file of maven, which >>> did not solve the problem >>> >>> >>>
[jira] [Created] (FLINK-20954) Optimize State with Cross Bundle State Cache In PyFlink
Huang Xingbo created FLINK-20954: Summary: Optimize State with Cross Bundle State Cache In PyFlink Key: FLINK-20954 URL: https://issues.apache.org/jira/browse/FLINK-20954 Project: Flink Issue Type: Improvement Components: API / Python Affects Versions: 1.13.0 Reporter: Huang Xingbo Fix For: 1.13.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20955) Refactor HBase Source in accordance with FLIP-27
Moritz Manner created FLINK-20955: - Summary: Refactor HBase Source in accordance with FLIP-27 Key: FLINK-20955 URL: https://issues.apache.org/jira/browse/FLINK-20955 Project: Flink Issue Type: Improvement Components: Connectors / HBase Reporter: Moritz Manner Fix For: 1.12.0 The HBase connector source implementation should be updated in accordance with [FLIP-27: Refactor Source Interface|https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface]. One source should map to one table in HBase. Users can specify which column[families] to watch; each change in one of the columns triggers a record with change type, table, column family, column, value, and timestamp. h3. Idea The new Flink HBase Source makes use of the internal [replication mechanism of HBase|https://hbase.apache.org/book.html#_cluster_replication]. The Source registers with the HBase cluster and will receive all WAL edits written in HBase. From those WAL edits the Source can create the DataStream. h3. Split We're still not 100% sure which information a Split should contain. We have the following possibilities: # There is only one Split per Source and the Split contains all the necessary information to connect with HBase. The SourceReader which processes the Split will receive all WAL edits for all tables and filter the relevant edits. # There are multiple Splits per Source, each Split representing one HBase Region to read from. This assumes that it is possible to only receive WAL edits from a specific HBase Region and not receive all WAL edits. This would be preferable as it allows parallel processing of multiple regions, but we still need to figure out how this is possible. In both cases the Split will contain information about the HBase instance and table. h3. Split Enumerator Depending on which Split design we decide on, the split enumerator will connect to HBase and get all relevant regions or just create one Split. -- This message was sent by Atlassian Jira (v8.3.4#803005)
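As a follow-up to the split discussion in FLINK-20955 above, here is a rough sketch of what such a split could carry in the single-split variant. The class and field names are illustrative assumptions, not the final design; only SourceSplit itself is the actual FLIP-27 interface:

{code:java}
import org.apache.flink.api.connector.source.SourceSplit;

/**
 * Illustrative sketch of an HBase split as discussed in FLINK-20955. In the
 * single-split variant, one such split carries everything the SourceReader needs
 * to subscribe to the WAL edits of one table. Field names are assumptions.
 */
public class HBaseSourceSplit implements SourceSplit {

    private final String splitId;
    private final String clusterKey;    // e.g. "zk-quorum:2181:/hbase", identifies the HBase instance
    private final String table;         // the single table this source maps to
    private final String columnFamily;  // which column family to watch

    public HBaseSourceSplit(String splitId, String clusterKey, String table, String columnFamily) {
        this.splitId = splitId;
        this.clusterKey = clusterKey;
        this.table = table;
        this.columnFamily = columnFamily;
    }

    @Override
    public String splitId() {
        return splitId;
    }

    public String clusterKey() {
        return clusterKey;
    }

    public String table() {
        return table;
    }

    public String columnFamily() {
        return columnFamily;
    }
}
{code}

In the per-region variant, the same class would additionally carry a region identifier so that the enumerator can hand one split per region to the readers.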
[jira] [Created] (FLINK-20956) SQLClientSchemaRegistryITCase.testWriting unstable
Till Rohrmann created FLINK-20956: - Summary: SQLClientSchemaRegistryITCase.testWriting unstable Key: FLINK-20956 URL: https://issues.apache.org/jira/browse/FLINK-20956 Project: Flink Issue Type: Bug Components: Table SQL / Client Affects Versions: 1.13.0 Reporter: Till Rohrmann Fix For: 1.13.0 The {{SQLClientSchemaRegistryITCase.testWriting}} failed on AZP with: {code} Jan 12 19:30:26 [ERROR] testWriting(org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase) Time elapsed: 145.834 s <<< ERROR! Jan 12 19:30:26 org.junit.runners.model.TestTimedOutException: test timed out after 12 milliseconds Jan 12 19:30:26 at java.lang.Object.wait(Native Method) Jan 12 19:30:26 at java.lang.Thread.join(Thread.java:1252) Jan 12 19:30:26 at java.lang.Thread.join(Thread.java:1326) Jan 12 19:30:26 at org.apache.kafka.clients.admin.KafkaAdminClient.close(KafkaAdminClient.java:541) Jan 12 19:30:26 at org.apache.kafka.clients.admin.Admin.close(Admin.java:96) Jan 12 19:30:26 at org.apache.kafka.clients.admin.Admin.close(Admin.java:79) Jan 12 19:30:26 at org.apache.flink.tests.util.kafka.KafkaContainerClient.createTopic(KafkaContainerClient.java:71) Jan 12 19:30:26 at org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testWriting(SQLClientSchemaRegistryITCase.java:168) Jan 12 19:30:26 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) Jan 12 19:30:26 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) Jan 12 19:30:26 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) Jan 12 19:30:26 at java.lang.reflect.Method.invoke(Method.java:498) Jan 12 19:30:26 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) Jan 12 19:30:26 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) Jan 12 19:30:26 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) Jan 12 19:30:26 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) Jan 12 19:30:26 at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) Jan 12 19:30:26 at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) Jan 12 19:30:26 at java.util.concurrent.FutureTask.run(FutureTask.java:266) Jan 12 19:30:26 at java.lang.Thread.run(Thread.java:748) {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
Re: [ANNOUNCE] Welcome Danny Cranmer as a new Apache Flink Committer
Thankyou everyone for the welcome! I look forward to working with more of you in the near future. Danny, On 13/01/2021, 07:38, "Yun Gao" wrote: CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe. Congratulations, Danny! Best, Yun -- From:Xintong Song Send Time:2021 Jan. 13 (Wed.) 15:29 To:dev Subject:Re: [ANNOUNCE] Welcome Danny Cranmer as a new Apache Flink Committer Congratulations, Danny. Welcome aboard. Thank you~ Xintong Song On Wed, Jan 13, 2021 at 3:25 PM Yun Tang wrote: > Congratulations Danny! > > Best > Yun Tang > > From: Jingsong Li > Sent: Wednesday, January 13, 2021 14:23 > To: dev > Subject: Re: [ANNOUNCE] Welcome Danny Cranmer as a new Apache Flink > Committer > > Congrats, Danny! > > Best, > Jingsong > > On Tue, Jan 12, 2021 at 7:55 PM Leonard Xu wrote: > > > Congratulations Danny! > > > > Best, > > Leonard > > > 在 2021年1月12日,19:44,Wei Zhong 写道: > > > > > > Congratulations Danny! > > > > > > Best, > > > Wei > > > > > >> 在 2021年1月12日,19:09,hailongwang <18868816...@163.com> 写道: > > >> > > >> Congratulations Danny! > > >> > > >> Best, > > >> Hailong在 2021-01-12 17:05:31,"Jark Wu" 写道: > > >>> Congratulations Danny! > > >>> > > >>> Best, > > >>> Jark > > >>> > > >>> On Tue, 12 Jan 2021 at 17:59, Yangze Guo wrote: > > >>> > > Congrats, Danny! > > > > Best, > > Yangze Guo > > > > On Tue, Jan 12, 2021 at 5:55 PM Xingbo Huang > > wrote: > > > > > > Congratulations, Danny! > > > > > > Best, > > > Xingbo > > > > > > Dian Fu 于2021年1月12日周二 下午5:48写道: > > > > > >> Congratulations, Danny! > > >> > > >> Regards, > > >> Dian > > >> > > >>> 在 2021年1月12日,下午5:40,Till Rohrmann 写道: > > >>> > > >>> Congrats and welcome Danny! > > >>> > > >>> Cheers, > > >>> Till > > >>> > > >>> On Tue, Jan 12, 2021 at 10:09 AM Dawid Wysakowicz < > > >> dwysakow...@apache.org> > > >>> wrote: > > >>> > > Congratulations, Danny! > > > > Best, > > > > Dawid > > > > On 12/01/2021 09:52, Paul Lam wrote: > > > Congrats, Danny! > > > > > > Best, > > > Paul Lam > > > > > >> 2021年1月12日 16:48,Tzu-Li (Gordon) Tai > 写道: > > >> > > >> Hi everyone, > > >> > > >> I'm very happy to announce that the Flink PMC has accepted > Danny > > Cranmer to > > >> become a committer of the project. > > >> > > >> Danny has been extremely helpful on the mailing lists with > > answering > > user > > >> questions on the AWS Kinesis connector, and has driven > numerous > > new > > >> features and timely bug fixes for the connector as well. > > >> > > >> Please join me in welcoming Danny :) > > >> > > >> Cheers, > > >> Gordon > > > > > > > > > >> > > >> > > > > > > > > > > > -- > Best, Jingsong Lee >
[jira] [Created] (FLINK-20957) StreamTaskTest.testProcessWithUnAvailableOutput unstable
Till Rohrmann created FLINK-20957: - Summary: StreamTaskTest.testProcessWithUnAvailableOutput unstable Key: FLINK-20957 URL: https://issues.apache.org/jira/browse/FLINK-20957 Project: Flink Issue Type: Bug Components: Runtime / Task Affects Versions: 1.13.0 Reporter: Till Rohrmann Fix For: 1.13.0 The {{StreamTaskTest.testProcessWithUnAvailableOutput}} fails on AZP. {code} [ERROR] testProcessWithUnAvailableOutput(org.apache.flink.streaming.runtime.tasks.StreamTaskTest) Time elapsed: 0.145 s <<< FAILURE! java.lang.AssertionError: Expected: a value equal to or greater than <42L> but: <41L> was less than <42L> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) at org.junit.Assert.assertThat(Assert.java:956) at org.junit.Assert.assertThat(Assert.java:923) at org.apache.flink.streaming.runtime.tasks.StreamTaskTest.testProcessWithUnAvailableOutput(StreamTaskTest.java:1333) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) {code} cc [~pnowojski] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20958) Starting flink doc build script in preview mode occur version error
ZhaoLiming created FLINK-20958: -- Summary: Starting flink doc build script in preview mode occur version error Key: FLINK-20958 URL: https://issues.apache.org/jira/browse/FLINK-20958 Project: Flink Issue Type: Improvement Environment: macOS Mojava 10.14.6 Ruby:2.3.7 Reporter: ZhaoLiming error msg: [DEPRECATED] The `--path` flag is deprecated because it relies on being remembered across bundler invocations, which bundler will no longer do in future versions. Instead please use `bundle config set --local path '.rubydeps'`, and stop using this flag Your Ruby version is 2.3.7, but your Gemfile specified >= 2.4.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20959) How to close Apache Flink REST API
wuchangwen created FLINK-20959: -- Summary: How to close Apache Flink REST API Key: FLINK-20959 URL: https://issues.apache.org/jira/browse/FLINK-20959 Project: Flink Issue Type: Bug Components: API / Core Affects Versions: 1.10.2 Reporter: wuchangwen Fix For: 1.10.2 Apache Flink 1.10.2 has CVE-2020-17518 vulnerability in the REST API. Now that I want to turn off the REST API service, how should I set up the configuration file? -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20960) Add warning in 1.12 release notes about potential corrupt data stream with unaligned checkpoint
Xintong Song created FLINK-20960: Summary: Add warning in 1.12 release notes about potential corrupt data stream with unaligned checkpoint Key: FLINK-20960 URL: https://issues.apache.org/jira/browse/FLINK-20960 Project: Flink Issue Type: Task Components: Documentation Affects Versions: 1.12.0 Reporter: Xintong Song Assignee: Xintong Song Fix For: 1.13.0, 1.12.1 -- This message was sent by Atlassian Jira (v8.3.4#803005)
Re: [DISCUSS] Backport broadcast operations in BATCH mode to Flink
Hi, Given that the BATCH execution mode was only released in 1.12 and the suggested change has a rather small impact, I'd be ok with backporting it to 1.12.x. Best, Dawid On 07/01/2021 12:50, Kostas Kloudas wrote: > +1 on my side as it does not break anything and it can act as motivation > for some people to upgrade. > > Cheers, > Kostas > > On Thu, 7 Jan 2021, 12:39 Aljoscha Krettek, wrote: > >> 1.12.x >> Reply-To: >> >> Hi, >> >> what do you think about backporting FLINK-20491 [1] to Flink 1.12.x? >> >> I (we, including Dawid and Kostas) are a bit torn on this. >> >> a) It's a limitation of Flink 1.12.0 and fixing this seems very good for >> users that would otherwise have to wait until Flink 1.13.0. >> >> b) It's technically a new feature. We allow something with this change >> where previously an `UnsupportedOperationException` would be thrown. >> >> I would lean towards backporting this to 1.12.x. Thoughts? >> >> Best, >> Aljoscha >> >> [1] https://issues.apache.org/jira/browse/FLINK-20491
[jira] [Created] (FLINK-20961) Flink throws NullPointerException for tables created from DataStream with no assigned timestamps and watermarks
Yuval Itzchakov created FLINK-20961: --- Summary: Flink throws NullPointerException for tables created from DataStream with no assigned timestamps and watermarks Key: FLINK-20961 URL: https://issues.apache.org/jira/browse/FLINK-20961 Project: Flink Issue Type: Bug Components: Table SQL / Runtime Affects Versions: 1.12.0 Reporter: Yuval Itzchakov Given the following program: {code:java} //import org.apache.flink.api.common.eventtime.{ SerializableTimestampAssigner, WatermarkStrategy } import org.apache.flink.streaming.api.functions.source.SourceFunction import org.apache.flink.streaming.api.scala.{StreamExecutionEnvironment, _} import org.apache.flink.streaming.api.watermark.Watermark import org.apache.flink.table.annotation.{DataTypeHint, FunctionHint} import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment import org.apache.flink.table.api.{$, AnyWithOperations} import org.apache.flink.table.functions.{AggregateFunction, ScalarFunction} import java.time.Instant object BugRepro { def text: String = s""" |{ | "s": "hello", | "i": ${Random.nextInt()} |} |""".stripMargin def main(args: Array[String]): Unit = { val flink = StreamExecutionEnvironment.createLocalEnvironment() val tableEnv = StreamTableEnvironment.create(flink) val dataStream = flink .addSource { new SourceFunction[(Long, String)] { var isRunning = true override def run(ctx: SourceFunction.SourceContext[(Long, String)]): Unit = while (isRunning) { val x = (Instant.now().toEpochMilli, text) ctx.collect(x) ctx.emitWatermark(new Watermark(x._1)) Thread.sleep(300) } override def cancel(): Unit = isRunning = false } } // .assignTimestampsAndWatermarks( //WatermarkStrategy // .forBoundedOutOfOrderness[(Long, String)](Duration.ofSeconds(30)) // .withTimestampAssigner { //new SerializableTimestampAssigner[(Long, String)] { // override def extractTimestamp(element: (Long, String), recordTimestamp: Long): Long = //element._1 //} // } // ) // tableEnv.createTemporaryView("testview", dataStream, $("event_time").rowtime(), $("json_text")) val res = tableEnv.sqlQuery(""" |SELECT json_text |FROM testview |""".stripMargin) val sink = tableEnv.executeSql( """ |CREATE TABLE SINK ( | json_text STRING |) |WITH ( | 'connector' = 'print' |) |""".stripMargin )res.executeInsert("SINK").await() () } res.executeInsert("SINK").await() {code} Flink will throw a NullPointerException at runtime: {code:java} Caused by: java.lang.NullPointerExceptionCaused by: java.lang.NullPointerException at SourceConversion$3.processElement(Unknown Source) at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71) at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46) at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26) at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52) at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30) at org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollect(StreamSourceContexts.java:305) at org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collect(StreamSourceContexts.java:394) at ai.hunters.pipeline.BugRepro$$anon$1.run(BugRepro.scala:78) at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100) at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63) at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:215) {code} This is due to the fact that the DataStream did not assign a timestamp to the underlying source. This is the generated code: {code:java} public class SourceConversion$3 extends org.apache.flink.table.runtime.operators.AbstractProcessStreamOperator implements org.apache.flink.streaming.api.operators.OneInputStreamOperator { private final Object[] references; private transient org.apache.flink.table.data.util.DataFormatConverters.CaseClassConverter converter$0; org.apache.flink.table.data.GenericRowData out = new org.apache.flink.table.data.GenericRowData(2); private final org.apache.flink.stre
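Regarding the root cause noted in FLINK-20961 above (no timestamps assigned on the DataStream although the view declares a rowtime attribute), here is a small Java sketch of the workaround mirroring the commented-out block in the reproducer. It is an illustrative workaround under the stated assumption about the stream's element type, not a fix for the bug itself:

{code:java}
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;

public final class RowtimeWorkaround {

    /**
     * Returns the same stream with event-time timestamps and watermarks assigned, so
     * that registering it with $("event_time").rowtime() has a timestamp to work with.
     */
    public static DataStream<Tuple2<Long, String>> withEventTime(DataStream<Tuple2<Long, String>> stream) {
        return stream.assignTimestampsAndWatermarks(
                WatermarkStrategy.<Tuple2<Long, String>>forBoundedOutOfOrderness(Duration.ofSeconds(30))
                        .withTimestampAssigner((element, recordTimestamp) -> element.f0));
    }
}
{code}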
InfluxDB Source Sink Connector
Hi all, we want to contribute a connector for InfluxDB v2.0. In the following, we briefly describe our planned architecture. We would appreciate any comments on our plans.

Background: InfluxDB is a time-series database created by InfluxData. InfluxData offers an open source, enterprise, and cloud version. The open source version can be deployed on a single node, whereas the enterprise version supports horizontal scaling. The connector focuses on the open source version of InfluxDB. In particular, InfluxDB underwent significant architecture changes from v1 to v2, and we'll focus on the current v2 only.

Architecture: In the following, we propose a source and sink architecture using the new Flink connector APIs.

Source: The Flink InfluxDB source connector will implement an InfluxDB write endpoint that accepts line-protocol-formatted data points for an InfluxDB instance, i.e., existing tools like Telegraf will be compatible with this source connector out of the box. In short, we replicate the InfluxDB write endpoint in our source by providing an HTTP REST API with a single endpoint, i.e., `POST /api/v2/write`.

Sink: The Flink InfluxDB sink connector builds upon the principles of InfluxDB. Although InfluxDB does not support transactions, it does not store duplicate data points: InfluxDB identifies unique data points by their measurement, tag set, and timestamp. Hence, to guarantee exactly-once semantics, we can simply (re-)write to InfluxDB.

Finally, we are highly interested in concrete use case ideas. We are currently thinking of real-time IoT sensor analytics, but please reach out to us if you have others in mind.

Best regards, Ramin Gharib, Felix Seidel, and Leon Papke -- Sent from: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/
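To make the line-protocol part of the proposal above a bit more tangible, here is a minimal Java sketch of how a single point is laid out in InfluxDB line protocol ("measurement,tagKey=tagValue fieldKey=fieldValue timestamp"). The class and method names are illustrative assumptions, and the sketch omits the escaping and field-type encoding a real connector would need:

{code:java}
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

/**
 * Illustrative sketch: formatting one data point in InfluxDB line protocol. Two points
 * with the same measurement, tag set, and timestamp overwrite each other, which is why
 * simply re-writing points on recovery is sufficient for the exactly-once argument above.
 */
public final class LineProtocol {

    public static String format(
            String measurement,
            Map<String, String> tags,
            Map<String, Object> fields,
            long timestampNanos) {
        // Tags are sorted by key for a stable representation.
        String tagPart = new TreeMap<>(tags).entrySet().stream()
                .map(e -> "," + e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining());
        String fieldPart = fields.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(","));
        return measurement + tagPart + " " + fieldPart + " " + timestampNanos;
    }
}
{code}

For example, format("cpu", Map.of("host", "server01"), Map.of("usage", 0.64), 1610000000000000000L) would yield "cpu,host=server01 usage=0.64 1610000000000000000".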
Re: [ANNOUNCE] Welcome Danny Cranmer as a new Apache Flink Committer
Congrats and welcome, Danny! Best Regards, Yu On Wed, 13 Jan 2021 at 15:37, Yun Gao wrote: > Congratulations, Danny! > > Best, > Yun > > > -- > From:Xintong Song > Send Time:2021 Jan. 13 (Wed.) 15:29 > To:dev > Subject:Re: [ANNOUNCE] Welcome Danny Cranmer as a new Apache Flink > Committer > > Congratulations, Danny. > Welcome aboard. > > Thank you~ > > Xintong Song > > > > On Wed, Jan 13, 2021 at 3:25 PM Yun Tang wrote: > > > Congratulations Danny! > > > > Best > > Yun Tang > > > > From: Jingsong Li > > Sent: Wednesday, January 13, 2021 14:23 > > To: dev > > Subject: Re: [ANNOUNCE] Welcome Danny Cranmer as a new Apache Flink > > Committer > > > > Congrats, Danny! > > > > Best, > > Jingsong > > > > On Tue, Jan 12, 2021 at 7:55 PM Leonard Xu wrote: > > > > > Congratulations Danny! > > > > > > Best, > > > Leonard > > > > 在 2021年1月12日,19:44,Wei Zhong 写道: > > > > > > > > Congratulations Danny! > > > > > > > > Best, > > > > Wei > > > > > > > >> 在 2021年1月12日,19:09,hailongwang <18868816...@163.com> 写道: > > > >> > > > >> Congratulations Danny! > > > >> > > > >> Best, > > > >> Hailong在 2021-01-12 17:05:31,"Jark Wu" 写道: > > > >>> Congratulations Danny! > > > >>> > > > >>> Best, > > > >>> Jark > > > >>> > > > >>> On Tue, 12 Jan 2021 at 17:59, Yangze Guo > wrote: > > > >>> > > > Congrats, Danny! > > > > > > Best, > > > Yangze Guo > > > > > > On Tue, Jan 12, 2021 at 5:55 PM Xingbo Huang > > > wrote: > > > > > > > > Congratulations, Danny! > > > > > > > > Best, > > > > Xingbo > > > > > > > > Dian Fu 于2021年1月12日周二 下午5:48写道: > > > > > > > >> Congratulations, Danny! > > > >> > > > >> Regards, > > > >> Dian > > > >> > > > >>> 在 2021年1月12日,下午5:40,Till Rohrmann 写道: > > > >>> > > > >>> Congrats and welcome Danny! > > > >>> > > > >>> Cheers, > > > >>> Till > > > >>> > > > >>> On Tue, Jan 12, 2021 at 10:09 AM Dawid Wysakowicz < > > > >> dwysakow...@apache.org> > > > >>> wrote: > > > >>> > > > Congratulations, Danny! > > > > > > Best, > > > > > > Dawid > > > > > > On 12/01/2021 09:52, Paul Lam wrote: > > > > Congrats, Danny! > > > > > > > > Best, > > > > Paul Lam > > > > > > > >> 2021年1月12日 16:48,Tzu-Li (Gordon) Tai > > 写道: > > > >> > > > >> Hi everyone, > > > >> > > > >> I'm very happy to announce that the Flink PMC has accepted > > Danny > > > Cranmer to > > > >> become a committer of the project. > > > >> > > > >> Danny has been extremely helpful on the mailing lists with > > > answering > > > user > > > >> questions on the AWS Kinesis connector, and has driven > > numerous > > > new > > > >> features and timely bug fixes for the connector as well. > > > >> > > > >> Please join me in welcoming Danny :) > > > >> > > > >> Cheers, > > > >> Gordon > > > > > > > > > > > > > >> > > > >> > > > > > > > > > > > > > > > > > -- > > Best, Jingsong Lee > > >
Re: [DISCUSS] Releasing Apache Flink 1.10.3
+1 for having a bugfix release for the 1.10 branch to fix the security issue. Thanks for driving the discussion Matthias! Minor: CVE-2020-17519 is introduced by 1.11.0 [1] so we don't need to fix it in 1.10.3, but CVE-2020-17518 [2] is needed. Best Regards, Yu [1] https://s.apache.org/CVE-2020-17519 [2] https://s.apache.org/CVE-2020-17518 On Wed, 13 Jan 2021 at 16:57, Till Rohrmann wrote: > Thanks for starting this discussion Matthias. I agree with all of you that > a final 1.10.3 release could be really helpful for our users. Given that CI > passes, it shouldn't be too much overhead either. > > Cheers, > Till > > On Wed, Jan 13, 2021 at 9:45 AM Xingbo Huang wrote: > > > Thanks for starting this discussion, Matthias. > > > > +1 for releasing 1.10.3 as it contains a number of important fixes. > > > > Best, > > Xingbo > > > > Xintong Song 于2021年1月13日周三 下午3:46写道: > > > > > Thanks for bringing this up, Matthias. > > > > > > Per the "Update Policy for old releases" [1], normally we do not > release > > > 1.10.x after 1.12.0 is released. However, the policy also says that we > > are > > > "open to discussing bugfix releases for even older versions". > > > > > > In this case, I'm +1 for releasing 1.10.3, for the dozens of > > non-released > > > fixes and the security flaws. > > > > > > As a reminder, I'd like to bring up FLINK-20906 [2] to be backported if > > we > > > are releasing 1.10.3, which updates the copyright year in NOTICE files > to > > > 2021. > > > > > > Thank you~ > > > > > > Xintong Song > > > > > > > > > [1] https://flink.apache.org/downloads.html > > > [2] https://issues.apache.org/jira/browse/FLINK-20906 > > > > > > On Tue, Jan 12, 2021 at 7:15 PM Matthias Pohl > > > wrote: > > > > > > > Hi, > > > > I'd like to initiate a discussion on releasing Flink 1.10.3. There > > were a > > > > few requests in favor of this already in [1] and [2]. > > > > > > > > I checked the release-1.10 branch: 55 commits are not released, yet. > > > > Some non-released fixes that might be relevant are: > > > > - FLINK-20218 [3] - fix "module 'urllib' has no attribute 'parse'" > due > > to > > > > ProtoBuf version update > > > > - FLINK-20013 [4] - BoundedBlockingSubpartition may leak network > buffer > > > > - FLINK-19252 [5] - temporary folder is not created when missing > > > > - FLINK-19557 [6] - LeaderRetrievalListener notification upon > ZooKeeper > > > > reconnection > > > > - FLINK-19523 [7] - hide sensitive information in logs > > > > > > > > In addition to that, we would like to include a backport for > > > CVE-2020-17518 > > > > and CVE-2020-17519 to cover the request in [2]. > > > > > > > > The travis-ci build chain for release-1.10 seems to be stable [8]. > > > > Any thoughts on that? Unfortunately, I cannot volunteer as a release > > > > manager due to the lack of permissions. But I wanted to start the > > > > discussion, anyway. 
> > > > > > > > Best, > > > > Matthias > > > > > > > > [1] > > > > > > > > > > > > > > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Weekly-Community-Update-2020-44-45-td46486.html#a47610 > > > > [2] https://issues.apache.org/jira/browse/FLINK-20875 > > > > [3] https://issues.apache.org/jira/browse/FLINK-20218 > > > > [4] https://issues.apache.org/jira/browse/FLINK-20013 > > > > [5] https://issues.apache.org/jira/browse/FLINK-19252 > > > > [6] https://issues.apache.org/jira/browse/FLINK-19557 > > > > [7] https://issues.apache.org/jira/browse/FLINK-19523 > > > > [8] https://travis-ci.com/github/apache/flink/builds/212749910 > > > > > > > > > >
[jira] [Created] (FLINK-20962) Rewrite the example in 'flink-python/pyflink/shell.py'
Wei Zhong created FLINK-20962: - Summary: Rewrite the example in 'flink-python/pyflink/shell.py' Key: FLINK-20962 URL: https://issues.apache.org/jira/browse/FLINK-20962 Project: Flink Issue Type: Improvement Components: API / Python Reporter: Wei Zhong Currently the pyflink example in 'flink-python/pyflink/shell.py' was added in version 1.9 and has not been updated since. We need to rewrite it with the latest recommended API. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20963) Rewrite the example under 'flink-python/pyflink/table/examples'
Wei Zhong created FLINK-20963: - Summary: Rewrite the example under 'flink-python/pyflink/table/examples' Key: FLINK-20963 URL: https://issues.apache.org/jira/browse/FLINK-20963 Project: Flink Issue Type: Improvement Components: API / Python Reporter: Wei Zhong Currently the example under 'flink-python/pyflink/table/examples' still uses the deprecated APIs. We need to rewrite it with the latest recommended API. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20964) Introduce PythonStreamGroupWindowAggregateOperator
Huang Xingbo created FLINK-20964: Summary: Introduce PythonStreamGroupWindowAggregateOperator Key: FLINK-20964 URL: https://issues.apache.org/jira/browse/FLINK-20964 Project: Flink Issue Type: Sub-task Components: API / Python Affects Versions: 1.13.0 Reporter: Huang Xingbo Fix For: 1.13.0 Adds PythonStreamGroupWindowAggregateOperator to support running General Python Stream Group Window Aggregate Function -- This message was sent by Atlassian Jira (v8.3.4#803005)
[DISCUSS] Planning Flink 1.13
Hi all, With 1.12 having been released some time ago already, I thought it would be a good time to kickstart the 1.13 release cycle. What do you think about Guowei and me being the release managers for Flink 1.13? We are happy to volunteer for it. The second topic I wanted to raise is the rough timeline for the release. According to our usual 3 months plus the release testing/stabilising period, we should aim for the feature freeze at the end of March/beginning of April. Does that work for everyone? Let me know what you think. Best, Dawid
[DISCUSS] FLIP-157 Migrate Flink Documentation from Jekyll to Hugo
Hi All, I would like to start a discussion for FLIP-157: Migrating the Flink docs from Jekyll to Hugo. This will allow us: - Proper internationalization - Working Search - Sub-second build time ;) Please take a look and let me know what you think. Seth https://cwiki.apache.org/confluence/display/FLINK/FLIP-157+Migrate+Flink+Documentation+from+Jekyll+to+Hugo
Re: [DISCUSS] Backport broadcast operations in BATCH mode to Flink
+1 I would hope this helps attract more early adopters so if there are issues we can resolve them in time for 1.13. Seth On Wed, Jan 13, 2021 at 5:13 AM Dawid Wysakowicz wrote: > Hi, > > Given that the BATCH execution mode was only released in 1.12 and a > rather small impact of the suggested change I'd be ok with backporting > it to 1.12.x. > > Best, > > Dawid > > On 07/01/2021 12:50, Kostas Kloudas wrote: > > +1 on my side as it does not break anything and it can act as motivation > > for some people to upgrade. > > > > Cheers, > > Kostas > > > > On Thu, 7 Jan 2021, 12:39 Aljoscha Krettek, wrote: > > > >> 1.12.x > >> Reply-To: > >> > >> Hi, > >> > >> what do you think about backporting FLINK-20491 [1] to Flink 1.12.x? > >> > >> I (we, including Dawid and Kostas) are a bit torn on this. > >> > >> a) It's a limitation of Flink 1.12.0 and fixing this seems very good for > >> users that would otherwise have to wait until Flink 1.13.0. > >> > >> b) It's technically a new feature. We allow something with this change > >> where previously an `UnsupportedOperationException` would be thrown. > >> > >> I would lean towards backporting this to 1.12.x. Thoughts? > >> > >> Best, > >> Aljoscha > >> > >> [1] https://issues.apache.org/jira/browse/FLINK-20491 > >> > >> > >> > >
Re: [DISCUSS] FLIP-157 Migrate Flink Documentation from Jekyll to Hugo
+1 to do this. I really like what you have built and the advantages over Jekyll seem overwhelming to me. Hugo is very flexible and I've seen a few other projects use it successfully for docs. On Wed, Jan 13, 2021, at 5:14 PM, Seth Wiesman wrote: > Hi All, > > I would like to start a discussion for FLIP-157: Migrating the Flink docs > from Jekyll to Hugo. > > This will allow us: > >- Proper internationalization >- Working Search >- Sub-second build time ;) > > Please take a look and let me know what you think. > > Seth > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-157+Migrate+Flink+Documentation+from+Jekyll+to+Hugo >
Re: [VOTE] Release 1.12.1, release candidate #2
+1 - no dependencies have been changed requiring license updates - nothing seems to be missing from the maven repository - verified checksums/signatures On 1/10/2021 2:23 AM, Xintong Song wrote: Hi everyone, Please review and vote on the release candidate #2 for the version 1.12.1, as follows: [ ] +1, Approve the release [ ] -1, Do not approve the release (please provide specific comments) The complete staging area is available for your review, which includes: * JIRA release notes [1], * the official Apache source release and binary convenience releases to be deployed to dist.apache.org [2], which are signed with the key with fingerprint F8E419AA0B60C28879E876859DFF40967ABFC5A4 [3], * all artifacts to be deployed to the Maven Central Repository [4], * source code tag "release-1.12.1-rc2" [5], * website pull request listing the new release and adding announcement blog post [6]. The vote will be open for at least 72 hours. It is adopted by majority approval, with at least 3 PMC affirmative votes. Thanks, Xintong Song [1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12349459 [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.12.1-rc2 [3] https://dist.apache.org/repos/dist/release/flink/KEYS [4] https://repository.apache.org/content/repositories/orgapacheflink-1411 [5] https://github.com/apache/flink/releases/tag/release-1.12.1-rc2 [6] https://github.com/apache/flink-web/pull/405
Re: [DISCUSS] Backport broadcast operations in BATCH mode to Flink
+1 Best, Kurt On Thu, Jan 14, 2021 at 12:25 AM Seth Wiesman wrote: > +1 > > I would hope this helps attract more early adopters so if there are issues > we can resolve them in time for 1.13. > > Seth > > On Wed, Jan 13, 2021 at 5:13 AM Dawid Wysakowicz > wrote: > > > Hi, > > > > Given that the BATCH execution mode was only released in 1.12 and a > > rather small impact of the suggested change I'd be ok with backporting > > it to 1.12.x. > > > > Best, > > > > Dawid > > > > On 07/01/2021 12:50, Kostas Kloudas wrote: > > > +1 on my side as it does not break anything and it can act as > motivation > > > for some people to upgrade. > > > > > > Cheers, > > > Kostas > > > > > > On Thu, 7 Jan 2021, 12:39 Aljoscha Krettek, > wrote: > > > > > >> 1.12.x > > >> Reply-To: > > >> > > >> Hi, > > >> > > >> what do you think about backporting FLINK-20491 [1] to Flink 1.12.x? > > >> > > >> I (we, including Dawid and Kostas) are a bit torn on this. > > >> > > >> a) It's a limitation of Flink 1.12.0 and fixing this seems very good > for > > >> users that would otherwise have to wait until Flink 1.13.0. > > >> > > >> b) It's technically a new feature. We allow something with this change > > >> where previously an `UnsupportedOperationException` would be thrown. > > >> > > >> I would lean towards backporting this to 1.12.x. Thoughts? > > >> > > >> Best, > > >> Aljoscha > > >> > > >> [1] https://issues.apache.org/jira/browse/FLINK-20491 > > >> > > >> > > >> > > > > >
Re: [VOTE] Release 1.12.1, release candidate #2
+1 (non-binding) - verified checksums and signatures - no binaries found in source archive - build from source - played with a couple of example jobs - played with various deployment modes - webui and logs look good On Thu, Jan 14, 2021 at 1:02 AM Chesnay Schepler wrote: > +1 > > - no dependencies have been changed requiring license updates > - nothing seems to be missing from the maven repository > - verified checksums/signatures > > On 1/10/2021 2:23 AM, Xintong Song wrote: > > Hi everyone, > > > > Please review and vote on the release candidate #2 for the version > 1.12.1, > > as follows: > > > > [ ] +1, Approve the release > > [ ] -1, Do not approve the release (please provide specific comments) > > > > The complete staging area is available for your review, which includes: > > * JIRA release notes [1], > > * the official Apache source release and binary convenience releases to > be > > deployed to dist.apache.org [2], which are signed with the key with > > fingerprint F8E419AA0B60C28879E876859DFF40967ABFC5A4 [3], > > * all artifacts to be deployed to the Maven Central Repository [4], > > * source code tag "release-1.12.1-rc2" [5], > > * website pull request listing the new release and adding announcement > blog > > post [6]. > > > > The vote will be open for at least 72 hours. It is adopted by majority > > approval, with at least 3 PMC affirmative votes. > > > > Thanks, > > Xintong Song > > > > [1] > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12349459 > > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.12.1-rc2 > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS > > [4] > https://repository.apache.org/content/repositories/orgapacheflink-1411 > > [5] https://github.com/apache/flink/releases/tag/release-1.12.1-rc2 > > [6] https://github.com/apache/flink-web/pull/405 > > > >
Re: [DISCUSS] Planning Flink 1.13
Thanks for kicking off the 1.13 release cycle and volunteering as the release managers. +1 for Dawid & Guowei as the 1.13 release managers. +1 for targeting feature freeze at the end of March Thank you~ Xintong Song On Wed, Jan 13, 2021 at 10:48 PM Dawid Wysakowicz wrote: > Hi all, > With the 1.12 being released some time ago already I thought it would be > a good time to kickstart the 1.13 release cycle. > > What do you think about Guowei and me being the release managers for > Flink 1.13? We are happy to volunteer for it. > > The second topic I wanted to raise was the rough timeline for the > release. According to our usual 3 months + the release > testing/stabilising period > we should aim with the feature freeze for the end of March/beginning of > April. Does that work for everyone? > > Let me know what you think. > > Best, > Dawid > > >
Re: [DISCUSS] Releasing Apache Flink 1.10.3
Maybe I can help drive this release, if there's no one else volunteering. I've been managing the 1.11.3 and 1.12.1 releases. The bugfix release process is still warm in my mind. :) Thank you~ Xintong Song On Wed, Jan 13, 2021 at 8:09 PM Yu Li wrote: > +1 for having a bugfix release for the 1.10 branch to fix the security > issue. > > Thanks for driving the discussion Matthias! > > Minor: CVE-2020-17519 is introduced by 1.11.0 [1] so we don't need to fix > it in 1.10.3, but CVE-2020-17518 [2] is needed. > > Best Regards, > Yu > > [1] https://s.apache.org/CVE-2020-17519 > [2] https://s.apache.org/CVE-2020-17518 > > > On Wed, 13 Jan 2021 at 16:57, Till Rohrmann wrote: > > > Thanks for starting this discussion Matthias. I agree with all of you > that > > a final 1.10.3 release could be really helpful for our users. Given that > CI > > passes, it shouldn't be too much overhead either. > > > > Cheers, > > Till > > > > On Wed, Jan 13, 2021 at 9:45 AM Xingbo Huang wrote: > > > > > Thanks for starting this discussion, Matthias. > > > > > > +1 for releasing 1.10.3 as it contains a number of important fixes. > > > > > > Best, > > > Xingbo > > > > > > Xintong Song 于2021年1月13日周三 下午3:46写道: > > > > > > > Thanks for bringing this up, Matthias. > > > > > > > > Per the "Update Policy for old releases" [1], normally we do not > > release > > > > 1.10.x after 1.12.0 is released. However, the policy also says that > we > > > are > > > > "open to discussing bugfix releases for even older versions". > > > > > > > > In this case, I'm +1 for releasing 1.10.3, for the dozens of > > > non-released > > > > fixes and the security flaws. > > > > > > > > As a reminder, I'd like to bring up FLINK-20906 [2] to be backported > if > > > we > > > > are releasing 1.10.3, which updates the copyright year in NOTICE > files > > to > > > > 2021. > > > > > > > > Thank you~ > > > > > > > > Xintong Song > > > > > > > > > > > > [1] https://flink.apache.org/downloads.html > > > > [2] https://issues.apache.org/jira/browse/FLINK-20906 > > > > > > > > On Tue, Jan 12, 2021 at 7:15 PM Matthias Pohl < > matth...@ververica.com> > > > > wrote: > > > > > > > > > Hi, > > > > > I'd like to initiate a discussion on releasing Flink 1.10.3. There > > > were a > > > > > few requests in favor of this already in [1] and [2]. > > > > > > > > > > I checked the release-1.10 branch: 55 commits are not released, > yet. > > > > > Some non-released fixes that might be relevant are: > > > > > - FLINK-20218 [3] - fix "module 'urllib' has no attribute 'parse'" > > due > > > to > > > > > ProtoBuf version update > > > > > - FLINK-20013 [4] - BoundedBlockingSubpartition may leak network > > buffer > > > > > - FLINK-19252 [5] - temporary folder is not created when missing > > > > > - FLINK-19557 [6] - LeaderRetrievalListener notification upon > > ZooKeeper > > > > > reconnection > > > > > - FLINK-19523 [7] - hide sensitive information in logs > > > > > > > > > > In addition to that, we would like to include a backport for > > > > CVE-2020-17518 > > > > > and CVE-2020-17519 to cover the request in [2]. > > > > > > > > > > The travis-ci build chain for release-1.10 seems to be stable [8]. > > > > > Any thoughts on that? Unfortunately, I cannot volunteer as a > release > > > > > manager due to the lack of permissions. But I wanted to start the > > > > > discussion, anyway. 
> > > > > > > > > > Best, > > > > > Matthias > > > > > > > > > > [1] > > > > > > > > > > > > > > > > > > > > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Weekly-Community-Update-2020-44-45-td46486.html#a47610 > > > > > [2] https://issues.apache.org/jira/browse/FLINK-20875 > > > > > [3] https://issues.apache.org/jira/browse/FLINK-20218 > > > > > [4] https://issues.apache.org/jira/browse/FLINK-20013 > > > > > [5] https://issues.apache.org/jira/browse/FLINK-19252 > > > > > [6] https://issues.apache.org/jira/browse/FLINK-19557 > > > > > [7] https://issues.apache.org/jira/browse/FLINK-19523 > > > > > [8] https://travis-ci.com/github/apache/flink/builds/212749910 > > > > > > > > > > > > > > >
Re: [DISCUSS] FLIP-157 Migrate Flink Documentation from Jekyll to Hugo
The build time sounds impressive. Could you explain more what strong internationalization features it provides? Best, Jark On Thu, 14 Jan 2021 at 01:01, Ufuk Celebi wrote: > +1 to do this. I really like what you have build and the advantages to > Jekyll seem overwhelming to me. Hugo is very flexible and I've seen a few > other projects use it successfully for docs. > > On Wed, Jan 13, 2021, at 5:14 PM, Seth Wiesman wrote: > > Hi All, > > > > I would like to start a discussion for FLIP-157: Migrating the Flink docs > > from Jekyll to Hugo. > > > > This will allow us: > > > >- Proper internationalization > >- Working Search > >- Sub-second build time ;) > > > > Please take a look and let me know what you think. > > > > Seth > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-157+Migrate+Flink+Documentation+from+Jekyll+to+Hugo > > >
[jira] [Created] (FLINK-20965) BigDecimalTypeInfo can not be converted.
Wong Mulan created FLINK-20965: -- Summary: BigDecimalTypeInfo can not be converted. Key: FLINK-20965 URL: https://issues.apache.org/jira/browse/FLINK-20965 Project: Flink Issue Type: Bug Components: Table SQL / Runtime Affects Versions: 1.10.1 Reporter: Wong Mulan Attachments: image-2021-01-14-10-56-07-949.png, image-2021-01-14-10-59-03-656.png LegacyTypeInfoDataTypeConverter#toDataType cannot correctly convert BigDecimalTypeInfo; Types.BIG_DEC does not include BigDecimalTypeInfo. !image-2021-01-14-10-56-07-949.png! !image-2021-01-14-10-59-03-656.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
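A minimal reproducer sketch for FLINK-20965 above. The class locations are assumed from the blink planner packages and the precision/scale values are arbitrary; the expectation is that the converter yields DECIMAL(38, 18), while according to the report it only special-cases Types.BIG_DEC:

{code:java}
import org.apache.flink.table.runtime.typeutils.BigDecimalTypeInfo;
import org.apache.flink.table.types.DataType;
import org.apache.flink.table.types.utils.LegacyTypeInfoDataTypeConverter;

public final class BigDecimalTypeInfoRepro {

    public static void main(String[] args) {
        // A BigDecimalTypeInfo with explicit precision/scale, as produced by the blink planner.
        BigDecimalTypeInfo typeInfo = new BigDecimalTypeInfo(38, 18);

        // According to the report, this does not yield the expected DECIMAL(38, 18) data type.
        DataType dataType = LegacyTypeInfoDataTypeConverter.toDataType(typeInfo);
        System.out.println(dataType);
    }
}
{code}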
Re: [VOTE] Release 1.12.1, release candidate #2
+1 (non-binding) * Check the checksum and signatures * Build from source successfully * Deploy Flink on K8s natively with HA mode and check the related fixes * FLINK-20650, flink binary could work with the updated docker-entrypoint.sh in flink-docker * FLINK-20664, support service account for TaskManager pod * FLINK-20648, restore job from savepoint when using Kubernetes based HA services * Check the webUI and logs without abnormal information Best, Yang Xintong Song 于2021年1月14日周四 上午10:21写道: > +1 (non-binding) > > - verified checksums and signatures > - no binaries found in source archive > - build from source > - played with a couple of example jobs > - played with various deployment modes > - webui and logs look good > > On Thu, Jan 14, 2021 at 1:02 AM Chesnay Schepler > wrote: > > > +1 > > > > - no dependencies have been changed requiring license updates > > - nothing seems to be missing from the maven repository > > - verified checksums/signatures > > > > On 1/10/2021 2:23 AM, Xintong Song wrote: > > > Hi everyone, > > > > > > Please review and vote on the release candidate #2 for the version > > 1.12.1, > > > as follows: > > > > > > [ ] +1, Approve the release > > > [ ] -1, Do not approve the release (please provide specific comments) > > > > > > The complete staging area is available for your review, which includes: > > > * JIRA release notes [1], > > > * the official Apache source release and binary convenience releases to > > be > > > deployed to dist.apache.org [2], which are signed with the key with > > > fingerprint F8E419AA0B60C28879E876859DFF40967ABFC5A4 [3], > > > * all artifacts to be deployed to the Maven Central Repository [4], > > > * source code tag "release-1.12.1-rc2" [5], > > > * website pull request listing the new release and adding announcement > > blog > > > post [6]. > > > > > > The vote will be open for at least 72 hours. It is adopted by majority > > > approval, with at least 3 PMC affirmative votes. > > > > > > Thanks, > > > Xintong Song > > > > > > [1] > > > > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12349459 > > > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.12.1-rc2 > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS > > > [4] > > https://repository.apache.org/content/repositories/orgapacheflink-1411 > > > [5] https://github.com/apache/flink/releases/tag/release-1.12.1-rc2 > > > [6] https://github.com/apache/flink-web/pull/405 > > > > > > > >
[jira] [Created] (FLINK-20966) Rename StreamExecIntermediateTableScan to StreamPhysicalIntermediateTableScan
godfrey he created FLINK-20966: -- Summary: Rename StreamExecIntermediateTableScan to StreamPhysicalIntermediateTableScan Key: FLINK-20966 URL: https://issues.apache.org/jira/browse/FLINK-20966 Project: Flink Issue Type: Sub-task Components: Table SQL / Planner Reporter: godfrey he Fix For: 1.13.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)